PROCESS IMPROVEMENT

A Resource for Healthcare

Prepared By:

Lee Elliott and Donald G. Sluti, Ph.D.

Contents

Preface

The Authors

1. Introduction

Section 1 SPC Tools: Determining How Well the Process is Functioning

2. Understanding the Process

3. Hearing the Process

4. Understanding What the Process is Telling You

5. Delivering What the Customer Wants

Section 2 Process Improvement Tools: Making the Process Work Better

6. Root Cause Analysis

7. Developing Potential Solutions

8. Implement and Follow-up

9. Team Dynamics

Section 3 Related Tools: Understanding and Refining the Process

10. Finding and Eliminating Wasted Effort

11. Failure Mode and Effects Analysis (FMEA)

12. Other Models of Process Improvement

13. Cost of Quality

14. Project Management

Suggested Reading

Chapter 1

Introduction

There are things happening in healthcare that we never dreamed of years ago. Medications are being produced that cure diseases that were thought to be incurable. Surgical procedures are being performed that repair anomalies that no one thought could be repaired. Anyone who has been in healthcare for any length of time has to be simply awestruck by the impact advances in technology have had on the quality of care provided to our patients. It is truly an exciting time.

At the same time, we feel burdened by the pressures put on us by such things as government regulations, insurance companies, or financial limits. These aspects of healthcare may not be so exciting.

One of the recent entrants into the healthcare arena can further enhance the quality of care we provide and also help us with some of the more burdensome aspects of our jobs. This involves defining more clearly what the “customer” wants and then consistently delivering, at a minimum, what the customer wants. In fact, the goal is to surprise and delight the customer with the care given or service provided. At the same time, the work involved is done with a minimum of wasted effort and mistakes. Higher quality, higher customer satisfaction, less effort and fewer mistakes — these certainly warrant careful attention from those of us who work in healthcare.

The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) provided a label for this recent entry: Continuous Quality Improvement (CQI). CQI is an exceptionally powerful approach to optimizing performance. By incorporating a management philosophy and implementing tools for understanding and improving the processes in the organization, levels of quality can be attained that are not attainable without CQI.

An example of the management philosophy typically required for CQI is as follows: 1

• People want to do good work.

• The person who does the job knows the most about that job.

• More can be accomplished by working collaboratively to improve the system than by working individually or by working around the system.

• People are motivated by meaningful feedback about problems to be solved, being involved in determining how best to solve problems, and seeing whether the changes made had the desired effect.

• A structured approach to resolving problems using graphical techniques for feedback produces better solutions than an unstructured process (e.g., the ordinary group discussion that is used at most meetings).

• A cooperative, collaborative relationship between labor and management is significantly more effective than an adversarial relationship.

• Improving quality inevitably leads to reduced waste and re-work (i.e., doing it again because it was done incorrectly the first time), thereby increasing productivity.

In addition to a management philosophy, CQI in an organization is usually structured around a model. This provides structure to the performance improvement effort. A model that works well in healthcare was developed by Roey Kirk, a CQI consultant.2 Kirk’s model is attractive because it is similar to the scientific method that is familiar to many who work in healthcare. Her model is shown in Exhibit I-1.

Exhibit I-1

PERFORMANCE IMPROVEMENT ACTION PATH

Kirk’s model indicates that the CQI effort is started by clearly defining the problem to be addressed. Next, figure out what causes the problem. Third, develop a potential solution and try it out on a limited scale (i.e., conduct a pilot test). If it works in the limited test, then implement it fully. Monitor the process to make certain it improves as it should.

If the potential solution does not work in the testing phase, go back to a prior step in the model and continue to work until a solution is found that does work and continues to work. The performance improvement effort does not stop until the desired improvement in the process is clearly achieved — then, improve the process more.

There are some tools that are used to achieve optimal process improvement. These tools make it possible for the philosophy to be applied to the management of an organization and to the process improvement model.

The tools of CQI come in two categories. The first category is statistical process control (SPC). These tools will let you know if the process is consistently providing what the customer wants. This set of tools is discussed in Section One.

The other category of CQI tools is used with groups of people — teams — to improve processes. A significant component of the CQI philosophy involves recognition that the people who are most knowledgeable about a process are those who are involved in the process. Moreover, given the complexity of many processes, it is not common that any one person knows all that is needed to improve the process. As such, it makes sense to bring together teams made up of people who are involved in the process and who, when considered as a group, know the entire process. The tools to be used with teams, the process improvement tools, are discussed in Section Two.

There are several “things” related to CQI that can be used to further enhance process improvement. These are discussed in the third section and include finding and eliminating wasted effort, ways to minimize the frequency of process failures and the impact of failures that occur, other models that can be used in CQI, cost of quality, and tools for managing process improvement projects.

It is important to emphasize that the goal of CQI is to consistently provide, at a minimum, what the customer wants and to do so without the cost of waste and re-work. Organizations that are successful consistently meet or exceed customer desires. Those that don’t will go away (for more on this, see Quality Wars).3

EXERCISES

Chapter 1

1. Discuss the concept of quality. How has “quality,” as defined by the customer, changed over time?

2. How good is quality in healthcare? How do you know?

3. The definition of quality used here is “consistently meets, and where possible, exceeds customers’ expectations.” What are the implications of this definition for healthcare?

References:

1. GOAL/QPC (1988). A Pocket Guide of Tools For Continuous Improvement (p. 45). Methuen, MA: GOAL/QPC.

2. Kirk, R. (1994). Quality management and leadership skills. Training program presented at Saint Francis Medical Center, Grand Island, NE.

3. Main, J. (1994). Quality Wars. New York: Macmillan.

Section 1

SPC Tools:

Determining How Well the Process is Functioning

Continuous Quality Improvement (CQI) requires that we understand that all work gets done in processes; therefore, there is no work being done in healthcare that is not part of a process. As a consequence, if we are to improve the quality of our work, we must understand our processes.

The section begins with a brief overview of what a process is and some terminology needed to describe work done in a process. It also provides tools for describing a process – flowcharts. These tools are diagnostic in that they can clearly show where the process is flawed, where it is unnecessarily complex, where work assignments are improper, and so on.

Measurement is discussed next. Processes are mute. The only “sounds” you’ll hear from a process are complaints from customers or comments made by those involved in working within the process. Unfortunately, it is well known that many customers just walk away without a sound. As such, a process could be seriously flawed without anyone being aware of it at all. In order to understand what is happening within a process, you have to give it a way to communicate. It has to have a voice. Measurement is the way to “communicate” directly with the process.

Once you give the process a voice, you need to have a way to make sense of what the process is telling you. There are two approaches. One is to take a snapshot of the data. You do this by developing a histogram. This will show you the distribution, or shape, of the data and how the data fits relative to the process specifications; i.e., performance expectations of the process. At a glance, the process will tell you if it can do what is expected of it. Histograms are only a snapshot. They allow you to look at the process at a single point in time. A second and more powerful tool available for understanding a process is to use a control chart. The processes we will be the most interested in are not one-shot events. Rather, we are interested in those that happen again and again. The power of a control chart is that it provides a way for the process to tell you how it is changing over time.

There is a catch. No process yields absolutely consistent results every time. Even the output of processes that are working as they should will vary. This is expected. This is referred to as “natural” or “common cause” variation.

However, there are times when something is happening to the process that changes it more than you would expect. When this happens, there is “assignable cause” or “special cause” variation indicating the process is not working as it should. A control chart is the device you use to separate natural variation (noise) that happens in a process that is working as it should be working from assignable cause variation (signal) that shows that something is wrong – the process is becoming ill.

Because control charts are so powerful, they will permit you to hear the process telling you that something is wrong very early in the illness. If you wait until you begin to hear customer complaints, the process already may be severely ill. Therefore, waiting to hear complaints before acting to correct the process will likely only serve to make the correction more complex, difficult and time consuming. Control charts can tell you that action is needed before you start hearing complaints; they encourage proactive management.

Can the process perform as expected? That is, is it operating within specifications? If so, the process is capable. However, it is important that you not look at the capability of a process if the process has assignable cause variation. If the process has such variation, it is not stable, it is out of control. As such, it is simply not predictable; anything can happen. It can’t consistently meet expectations. Therefore, before looking at capability, the process must be in control.

To do a capability study, someone has to decide what is acceptable performance. That is, you need to set the specifications. There is no single way to do this. You may, for example, contact other hospitals, look at relevant databases, or check into government standards. Somehow, based on research or experience, you need to decide the top end of your specifications, or “upper specification limit” (USL); the bottom end of your specifications, or “lower specification limit” (LSL); and your “target,” or nominal, specification.

Using specifications and a histogram, you will be able to establish how the process is performing at a single point in time. Using these specifications and a control chart, you will be able to see how the process is performing over a period of time.

Your goal for the process is for it to be in control and capable over time. That is, it should consistently deliver what the customer wants. But is that good enough? Is consistently performing as expected the ultimate goal? Should we adhere to the old adage “If it ain’t broke, don’t fix it?” The answer comes from the management philosophy of CQI: continuously strive to improve.

Processes that are not performing consistently (we say they are “not in control”) need to be improved to remove whatever is producing the assignable cause variation. Processes that are not capable need to be improved so they can meet expectations. Processes that are both in control and capable need to be improved so that they can yield even higher quality. In short, good enough just might not be good enough. And, “if it ain’t broke, don’t fix it” is an adage that is no longer relevant. The new adage is “If it ain’t broke, improve it.” Therefore, to accomplish the first step in the Kirk model, select processes that are not in control, not capable, or that are targeted for further improvement in the organization’s strategic plan for quality.

Chapter 2 Understanding the Process

What is a process?

The concept of a process is based on the notion that each of us receives input and we change that input in a manner that will meet the needs of someone or something. That is, a supplier, or vendor, provides us with inputs. We act on those inputs and provide outputs to a customer. In diagram format, the notion of a process looks like this:
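Supplier → Inputs → Our Actions → Outputs → Customer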

There are a few terms that may help to make this idea of a process more understandable:

Supplier - Whoever provides input into the process. They are the ones we depend on.

Our Actions - What we do at work that is our reason for being there. It is what some refer to as “value added.”

Customer - Whoever receives the output of the process; the ones “downstream” from us at work. They are the ones who depend on us.

Immediate Customer - The first to receive the output of the process.

Ultimate Customer - The ones who receive the output after it has gone through all the processes it is to go through.

System - The collection of processes that affect an output. For example, there are many processes involved in the system of admitting a patient.

It is important to understand that all of us are customers to someone else, we all act on the input that a supplier provides us, and we are a supplier to someone else. Moreover, we are all involved in many processes.

It also is important to grasp the idea of a “system.” Healthcare organizations are made up of many systems; most of these systems are made up of many processes. Our emphasis, for now, will be on getting optimal performance from the processes that make up healthcare systems. Our emphasis will be with processes, not systems, for two reasons:

1) It’s more understandable and more realistic to start at the process level. Roey Kirk refers to addressing system-level problems as “taking on world hunger.” 1 The old adage “don’t bite off more than you can chew” is relevant here. Taking on a system problem without first developing high-performance processes is certainly biting off more than we can chew.

2) Joseph Juran, one of the pioneers of this approach to quality enhancement, found that 90% of poor performance is caused by defective, or poorly performing, processes. 2 If you want optimal performance in your organization, fix the processes!

Taking a Look at the Process

It’s a good idea to begin by carefully examining the process targeted for improvement. One way to do so is simply to go out and take a walk through the process. For example, you can accompany a patient through the part of Admissions that has been targeted for improvement. Look at each of the steps involved; look for such things as complexity (anything that is not simple signals potential problems), wasted effort, or breakdowns.

As an alternative, you can bring together a group of people that are knowledgeable about the process and ask them to describe the process. If you skip a few chapters ahead to Chapter 9, you’ll find the discussion on teams helpful if this is the approach you decide to take. In fact, whatever approach you use to look at the process, it’s always a good idea to include people who are the most knowledgeable about the process when developing a description of the process to ensure that the description is accurate.

A third way of looking at a process is to use a workflow log. Have a clipboard with a piece of paper on it accompany the “input” received by the process throughout the process. Each person that comes in contact with the clipboard will note when they started to work on the input, what they did with it, and when they last had contact with the input. For those who like forms, it might look something like Exhibit II-1.

Exhibit II-1

Work Flow Analysis

Process: _________________________________________________

Starting point: ______________________________________

Ending point: _______________________________________

Return completed form to: ___________________________________

Time of Start    Action Taken*    Time of Last Contact

Step 1 _________ __________________________ _____________

Step 2 _________ __________________________ _____________

Step 3 _________ __________________________ _____________

Step 4 _________ __________________________ _____________

Step 5 _________ __________________________ _____________

Step 6 _________ __________________________ _____________

Step 7 _________ __________________________ _____________

Step 8 _________ __________________________ _____________

Step 9 _________ __________________________ _____________

* Include anything you did as part of this process. Include what you did if things went wrong.

Anytime you use a form like this you’ll want to provide some preliminary instruction to all involved so that it is used correctly and provides the information needed. Of course, it also should not provide documentation highly desirable to a plaintiff’s attorney (i.e., don’t do this without an attorney’s guidance if there is pending litigation).

Once you have taken a preliminary look at the process, it’s a good idea to draw a picture of it. That is, create one or more flowcharts. Flowcharts provide the sheet music for you to sing from in future efforts designed to improve the process. It is simply amazing how often a group of people begin to describe what is supposed to be a single process only to find they are all describing different processes. Certainly, part of this is due to the phenomenon of accurately describing only the part of the process with which they come in contact and then guessing about the rest. However, it is also due to the fact that most processes in healthcare were not carefully designed; they just evolved over time. As a result, few people totally understand any process. In fact, there are processes that no one understands, that no one approaches in the same way, or in which no one does their part the same way each time.

For example, consider the term STAT as it relates to requesting that medical supplies be delivered to a nursing care unit. Does STAT mean “I just want it right now,” “I need you to cover my poor planning so I’ve got to have it right now,” or “This is an emergency. It’s got to get here now”? If you are responsible for delivering supplies, what does the term STAT mean to you and how do you respond to a STAT order?

If you take a careful look at what we do in healthcare, you’ll find some of our processes are held together with Band-Aids (TM), chewing gum, hope, and a whole lot of good intentions. Years ago there was a cartoonist who specialized in drawing contraptions that were unbelievably complex but did very little. If this cartoonist, Rube Goldberg, were around today he’d be very impressed with what has happened in healthcare. If you doubt this, then help the folks in Patient Accounting explain a bill to one of our patients. Or, explain the policy for call pay to a new nurse. Or, explain the computer system to a physician not familiar with computers. Or…. oh, well, you get the idea. The moral of this little story is: don’t be surprised if you sometimes find it difficult to describe what really happens in your organization. When this occurs, the quiet clapping you hear in the background is that of Rube Goldberg. This complexity is no one’s fault; rather, healthcare processes have evolved this way for many reasons.

Getting to know a process can be a challenge, but it needs to be done. Get to know the process with all of its blemishes and irregularities. Then, draw a picture of the process: make a flowchart.

Flowcharts

There are several different approaches to flowcharting a process. Before you start, be sure that you define the step that begins the process and the step that ends it. If you don’t do this, you’re studying the entire system that this process is part of and that can’t be done easily using these tools.

Top-down Flowcharts

These are the simplest and easiest to develop. They emphasize the major steps of a process.

1. First, list the most basic steps in the process being studied. Have no more than 6-7 steps.

2. List these steps across the top of a flip chart.

3. Below each step, list up to 6-7 sub-steps.

Consider the following example of the process involved in recruiting and hiring a new employee (see Exhibit II-2).

Exhibit II-2

Top-down Flowchart for Recruiting and Hiring an Employee

1. Complete Personnel Requisition (PAF)
   1.1 Fill in PAF
   1.2 Route to Human Resources

2. Recruit Applicants
   2.1 Review PAF for accuracy
   2.2 Place ads in magazines or newspaper
   2.3 Contact professional recruiter if needed

3. Screen Applicants
   3.1 Compare applicants’ qualifications to requirements
   3.2 Reject applicants not qualified
   3.3 Schedule appointments for interviews

4. Interview Applicants
   4.1 Prepare a list of interview questions
   4.2 Interview each qualified applicant (take notes)

5. Hire
   5.1 Compare notes from interviews
   5.2 Select best candidate
   5.3 Offer job to candidate
   5.4 Arrange pre-employment physical, drug screen, etc.

Detailed Flowcharts

Detailed flowcharts include extensive information about what happens at every stage in a process. They list all steps, including wasted effort.

Consider the following example of a detailed flowchart depicting the process of employee recruitment (see Exhibit II-3):

These kinds of flowcharts should be used sparingly because the detail they provide is often unnecessary for understanding the process. Moreover, it can take a long time—maybe weeks—to hammer out a detailed flowchart that is agreeable to everyone. Not only can this effort become discouraging to the team, it can provide a distorted picture of the “true” flow of work. (As previously stated, it is not uncommon to find multiple processes—everyone does it differently.)

Responsibility Flowcharts

A responsibility flowchart shows the steps in a process and who is responsible to perform each step. This kind of chart is helpful when flowcharting processes that cut across department lines (see Exhibit II-4).

Exhibit II-4

Responsibility Flowchart

Step                   Rene    Jacob    Olga    Manuel

Plan the Report
Organize Report
Write the Report
Produce the Report

Box - Responsible for completion of step
Oval - Must be consulted before completion

Responsibility flowcharts can be combined with other kinds of flowcharts to show, for example, a more detailed flow of work along with who is responsible for each step. However, be careful not to try to accomplish too much or the flowchart can become confusing.

In this flowchart, those who are primarily responsible for a step have a box under their name by that step. Those with an oval must be consulted before going on to the next step.

Work Flow Diagram

This type of flowchart provides the most dynamic view of the way work gets done. It begins with a sketch of the work area. Then, someone records the movements of people, materials, documents, information, or whatever is the focus of study. The purpose is to look for excessive complexity or unnecessary movement. Usually, these will jump off the page (see Exhibit II-5).

Exhibit II-5

Workflow

A cook asked her assistant to obtain the items needed to complete 12 different recipes she prepared for each noon meal. She thought her assistant was taking too much time. Before confronting the assistant, she decided to develop a workflow chart to see if she could determine what the assistant was doing that slowed her down. She drew a picture of the assistant’s work area, then had the assistant draw each trip she took from one place to another during a one-hour period.

[Workflow diagram: lines tracing the assistant’s many trips among the Freezer, Papergoods Pantry, Mop Room, Receiving Area, and Food Pantry]

The cook immediately saw something was wrong. The assistant was making too many trips—way too many! It dawned on her that the problem was not with the assistant. Rather, she had been giving the work assignments to the assistant in a manner that led the assistant to go for each item in a recipe one at a time.

The cook sat down and prepared a list of all the items to be obtained from the freezer, a list of those to be obtained from the food pantry, and so forth. The next morning, the cook explained the new process for gathering the items to be used in recipes. She gave the assistant the list of everything to be gathered from the freezer and showed her several carts she could use when gathering items. Once everything was gathered from the freezer, the assistant was to get the list of items needed from the paper goods pantry, and so on. The cook showed the assistant where all the lists would be each morning. After trying it for a few days, the assistant again recorded the trips she took.

[Workflow diagram after the change: far fewer trips among the Freezer, Papergoods Pantry, Mop Room, Receiving Area, Food Pantry, Salad Maker’s Table, Cook’s Table, and Stove]

In fact, the assistant recorded all her trips, not just those made in an hour as she had before. The assistant had previously felt very rushed to complete her work; after the revised workflow was initiated, she no longer felt that way. Moreover, the cook had all the ingredients she needed in advance of when they were needed. Rather than criticizing the assistant, the cook now complimented her.

EXERCISES

Chapter 2

1. Describe a process in which you are involved. Complete a workflow log on the process.

2. Using the workflow log you completed above, draw the following:

a. Top-down flowchart

b. Detailed flowchart

c. Work flow diagram

If you have enough information, draw a responsibility flowchart.

References:

1. Kirk, R. (1994). Quality management and leadership skills. Training program presented at Saint Francis Medical Center, Grand Island, NE.

2. Main, J. (1994). Quality Wars. New York: Macmillan.

Chapter 3 Hearing the Process

Measurement

How do you know how well the process is working? You have to develop a way for the process to tell you how it is doing. You give the process a way to communicate with you by measuring the process. Much like taking a blood pressure on a patient, this measurement must be meaningful and easy to take. Moreover, it must be taken with adequate frequency so that you can catch changes in the process’ performance. To continue our analogy, few would feel comfortable at a hospital where the blood pressure of patients was checked only once every month or so.

It is not necessary to measure every output of a process. It is only important to measure the output that provides the most meaningful and useful information about the process. For example, if it is desirable for the people in Nutritional Services to provide consistent portions of food to their customers, maybe they could weigh each portion of food.

Also, it is important that the act of measurement requires only a reasonable amount of time and effort. If the people responsible for taking the measurement believe it takes too much time and/or effort, they might not do it. If they are forced to do it, they might not do a good job thereby corrupting the data.

Fortunately, most of the time it is not necessary to measure every output of the process. In fact, you may prefer to measure a sample of outputs from the process. Sampling will reduce time and effort of measurement without dramatically reducing the amount of confidence you can have in the analysis based on the data.

Operational Definition

In order to be sure your measurement is telling you how well the process is working, it is necessary to develop an “operational definition” of the measurement. Operational definitions help to clearly define what is being measured (see Exhibit III-1).

Components of an Operational Definition:

1. A measurement

2. The instrument to be used to take the measurement

3. The procedure for measuring

Each of these components must be clearly defined, understood, and consistently used by everyone collecting data.

Operational definitions are also referred to as “indicators” or “measures.”

Exhibit III-1

In order to determine how quickly a specimen from Surgery can be evaluated by Pathology, you might develop an operational definition as follows:

Measure: The number of minutes, to the nearest ½ minute, from when the specimen leaves the Operating Room until the report is received and acknowledged by the surgeon.

Instrument: The clock in the surgery room.

Procedure: The scrub nurse records the time when the specimen leaves the surgery room and the time the surgeon acknowledges receipt of the report. The difference between the two times is the measurement.

Types of Data

Measurement produces one of two types of data: “variable” or “attribute” data. Each of these types is described below.

Variable - Data that can be subdivided, often indefinitely (e.g., temperature, time, money).

Attribute - Data that consists of counts of observations or incidents falling into categories, including rates or percentages (e.g., percentage of appointment no-shows, post-operative infections). They cannot be subdivided (i.e., half an observation of an occurrence wouldn’t make sense).

Of the two types of data, variable data is more informative for reasons that will become obvious later.

Sampling

Sometimes it makes sense to gather data on every event that occurs in a process. In most cases in healthcare, it doesn’t make sense to do so. Whenever a process is high volume—and many in healthcare are—it may make more sense to sample the events. For example, if we decide to monitor noise levels in the Intensive Care Nursery, we could assign someone around-the-clock responsibility for monitoring the noise. However, it would be much less burdensome and expensive to periodically check the decibel level in the Intensive Care Nursery to get a reasonable idea of noise levels around the clock.

An exception to this need for sampling may be when the measurement instrument interfaces with the computer software used for creating control charts (described later). In that case, there is no extra effort involved, it costs no more, and the software is designed to manage large volumes of data. Unfortunately, in most situations, the measurement instruments will not be able to talk to the software. As such, you will need to develop a plan for sampling.

There are several different methods for sampling (a brief sketch in code follows the list):

• Simple random sampling—every event in the process has an equal chance of being sampled.

• Stratified sampling—the plan calls for groups, or strata, to be developed and a random sample selected from each group. For example, you may decide to take ten measurements during the night shift and ten during the day shift.

• Systematic sampling—periodically measuring an event. For example, you may wish to review every tenth medical record to measure the adequacy of patient education documentation.

• Cluster sampling—a typical group is selected, and a random sample is taken from that group. For example, Infection Control may look at Nursing Unit 4 to get an idea of how often a particular kind of infection is occurring on all nursing units.

• Judgment sampling—an expert provides opinion on which group to sample.
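To make the first three methods concrete, here is a minimal Python sketch, assuming a made-up list of 200 records tagged by shift; the record structure, counts, and shift labels are illustrative only, not part of any particular data collection plan.

import random

# Illustrative only: 200 hypothetical records, alternating day and night shift.
records = [{"id": i, "shift": "day" if i % 2 == 0 else "night"}
           for i in range(200)]

# Simple random sampling: every record has an equal chance of selection.
simple_random = random.sample(records, 20)

# Systematic sampling: every tenth record.
systematic = records[::10]

# Stratified sampling: a random sample drawn from each group (stratum).
day = [r for r in records if r["shift"] == "day"]
night = [r for r in records if r["shift"] == "night"]
stratified = random.sample(day, 10) + random.sample(night, 10)

print(len(simple_random), len(systematic), len(stratified))  # 20 20 20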

Select the method of sampling that will provide the information best suited to your needs, at the lowest cost, and with an amount of error that can be tolerated. Some things to consider:

• Simple random and systematic samples often are easy to select but might not include all the groups that should be measured.

• Stratified, cluster and judgment samples will include the groups you think need to be involved, but they will require more time for you to identify the events to be measured.

• The appropriateness of judgment sampling depends on the accuracy of the expert’s opinion.

There are errors that you have to watch for when developing a sampling plan, when collecting the measurements, and when interpreting the data. The more error that is present, the more likely you will make inappropriate decisions when interpreting the data. The most common errors and potential ways to minimize these errors include:

• Bias—permitting your own viewpoint to creep into the sampling process, measurement, or interpretation. Minimize by: Periodically checking with those knowledgeable about the process.

• Non-comparable data— having a perception that the measurements are coming from a common group when, in fact, the groups are very different. For example, believing that the measurements that come from a single nursing unit are representative of all units may be incorrect. Minimize by: Being sure that those involved in the process review the sampling plan, examine the data, and participate in developing any conclusions that are reached based on the data.

• Uncritical projection of trends— the assumption that what happened in the past will happen in the future; it might not. Minimize by: Designing your sampling plan so that it will be sensitive to potential changes. For example, if you are tracking frequency of violent behavior by patients on one unit of a psychiatric hospital because that is where most of the incidents have happened in the past, you may not notice significant increases in violence on another unit.

• Causation—concluding that one thing causes another merely because they appear to be related. Minimize by: Checking your sampling to see if you are making such assumptions. Ask someone involved in the process to do likewise.

• Improper sampling— using a sampling method for gathering data that necessarily creates bias. Minimize by: Checking with several knowledgeable people before proceeding with your planned sampling. For example, using the internet to conduct surveys of past patients to determine satisfaction in geographic areas where many people don’t have computers.

• Sampling error—choosing a sample that does not adequately represent the total group. This is just bad luck. Minimize by: Choosing a larger sample.

A good review of sampling methods, with some very helpful information, can be found in Evans and Lindsay.1

Develop a data collection plan. The plan will include the operational definition and sampling plan (see Exhibit III-2).

Exhibit III-2

Data Collection Plan—Unit 3

PROCESS: MEDICATION DOCUMENTATION

PROCESS OWNER: Sandra, Laurie, Deena, Department Director Beth

PROCESS SPONSOR: Karen

Why collect the data?

To ensure appropriate billing from medications administered and that medications administered are properly documented in each patient’s medical records.

What methods will be used for the analysis?

Pareto and p-chart.

What data will be collected?

Number of defective medical records

Number of defects on a medical record (i.e., exclusions):

Were nurses’ signatures present?

Were all routine orders signed for?

Was it documented that IV was continued and that PCA was continued at the beginning of the shift?

Was the volume of the IV solution documented?

Were blood products given documented with start and finish times?

Does documentation support the reason for medication being held?

How will the data be measured?

The type and frequency of the exclusions related to the number of records audited. (# of defectives/ # of medical records)

When will the data be collected?

Day shift audits the prior night shift, and night shift audits the prior day shift.

Where will the data be collected?

Monitoring tool will be found on the clipboard of each patient.

Who will collect the data?

All nurses on the Unit are involved in this collection. Information gathered will be reviewed by the process owners named above.

Review the data collection plan before starting a process improvement effort. You might find a flaw in the way data is collected. That is, what appears to be a problem with the process might, in actuality, be a problem with the way data is being collected.

It is also important to ensure that those who are collecting the data follow this plan. If they don’t, the data may not be useful. Sometimes people just don’t understand the data collection plan. Other times, they understand the plan just fine but intentionally distort the data in an effort to look good. Either way, the data is inaccurate. Before you start a process improvement effort, check it out. Talk to the data collectors to determine if the apparent assignable cause variation is the result of flawed data collection.

EXERCISES

Chapter 3

1. Develop an operational definition for a measurement of which you are knowledgeable.

Some examples:

• A child’s temperature

• An elderly patient’s blood pressure

• Tastiness of food

• Timeliness of bill collection

2. How would you sample the process for which you developed an operational definition?

3. Using the information you have already developed for a measurement, complete a data collection plan.

References:

1. Evans, J. R., & Lindsay, W. M. (1993). The Management and Control of Quality (pp. 620-658). St. Paul: West.

Chapter 4 Understanding What the Process is Telling You

Histogram

Frequency histograms are excellent tools to access the voice of the process. They provide a “snapshot” of the process; that is, they show how the process is working at a point in time (see Exhibit IV-1).

Exhibit IV-1

Histogram

[Frequency histogram: acute care transfers from the ER, with frequency (0 to 20) on the vertical axis and number of transfers per month (1 to 6) on the horizontal axis]

There is a protocol to use to develop histograms manually. By having everyone use this protocol, there can be a common understanding of what the histogram is saying.

The protocol is shown in Exhibit IV-2.

Exhibit IV-2

Constructing a Frequency Histogram

Step 1. Take measurements

Step 2. Draw a circle around the largest number in each column. Draw a box around the smallest number in each column.

Patient Waiting Times

33 41 17 22 6

29 12 44 16 30

49 37 38 28 48

15 20 29 27 16

37 22 32 27 36

41 18 14 41 19

18 17 21 35 13

5 15 42 28 7

20 26 39 12 23

26 22 24 33 30

Step 3. Draw a second circle around the largest of those numbers circled in Step 2. This is the largest number overall. Draw a second box around the smallest of those numbers boxed in Step 2. This is the smallest number overall.

Patient Waiting Times

33 41 17 22 6

29 12 44 16 30

49 37 38 28 48

15 20 29 27 16

37 22 32 27 36

41 18 14 41 19

18 17 21 35 13

5 15 42 28 7

20 26 39 12 23

26 22 24 33 30

Step 4. Calculate the range (R).

Range = largest number – smallest number

49 - 5 = 44

Step 5. Establish the intervals.

This is a crucial step. The purpose of a histogram is to condense the information provided by the measurements so that it makes sense. If the information is condensed too much, you will lose too much information—the histogram won’t tell you much, if anything. If the information is not condensed enough, the histogram will provide so much information it may become meaningless.

Be sure to follow the guidelines for determining the appropriate number of intervals. If you do, you will obtain the number of intervals necessary to get the most information possible from your data.

Be sure that all intervals are the same size. If not, the histogram will be distorted (i.e., the data is “fudged,” it’s misleading, and the graph developer is cheating).

Guidelines for determining the appropriate number of intervals

Number of measurements     Number of intervals
Fewer than 50              5 to 7
50 to 100                  6 to 10
101 to 150                 7 to 12
More than 150              10 to 12

In brief, based on the number of measurements you have collected, select the number of intervals that will provide you the most information.

For our example, let’s assume 8 intervals.

Step 6. Determine the size of interval.

Divide the range by the number of intervals you have selected.

44/8 = 5.5

Round the result so it will make the most sense.

For our example, the size of interval will be 6.

Step 7. Determine the boundaries

In just a couple of steps, you’re going to be asked to count how many measurements you have in each interval. Unfortunately, you can’t use the numbers at the end of each interval as a “boundary” to that interval. The reason is that any measurement that is equal to one of the numbers at the end of an interval will fall into two intervals.

For example, assume you decide your intervals will be 24-31, 31-38, 38-45, etc. What interval would 31 fall into, or 38? In fact, both of these numbers fall into two intervals.

To adjust for this, reset the boundaries you will use so that no measurement falls onto any boundaries.

For example, we only have whole numbers in our measurements. So, use decimals to set up your boundaries. The boundaries for the first interval of 4 to 10 will be 3.5 to 9.5.

When you count your measurements to see how many occur in each interval, use the boundaries to represent the interval. You will sacrifice a little precision by doing this, but you’ll avoid a great deal of confusion.

Step 8. Determine the midpoints of the intervals.

The purpose of the midpoint is to act as a label for the interval. It will not affect your counting. By determining the midpoints at this time, you can have all the information you need to start graphing once you have completed your count.

Step 9. Determine the frequencies.

Arrange a table to conduct your counts. Then count how many measurements are in each interval. (Remember: use the boundaries to represent each interval, not the actual interval.)

After you’ve completed your count, do it again. It is very easy to make a mistake when counting. It’s best to find the mistake now rather than after your histogram is complete and you’re attempting to make appropriate changes to the process.

Midpoint   Interval   Boundaries    Frequency
   7        4-10      3.5-9.5           3
  13       10-16      9.5-15.5          6
  19       16-22      15.5-21.5        10
  25       22-28      21.5-27.5         9
  31       28-34      27.5-33.5         9
  37       34-40      33.5-39.5         6
  43       40-46      39.5-45.5         5
  49       46-52      45.5-51.5         2
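If you would rather let a computer do the counting, the following short Python sketch (an illustration, not part of the protocol) reproduces Steps 4 through 9 for the patient waiting times above: it computes the range, applies the chosen interval size of 6 with boundaries offset by 0.5, and tallies the frequencies shown in the table.

data = [33, 41, 17, 22, 6, 29, 12, 44, 16, 30, 49, 37, 38, 28, 48,
        15, 20, 29, 27, 16, 37, 22, 32, 27, 36, 41, 18, 14, 41, 19,
        18, 17, 21, 35, 13, 5, 15, 42, 28, 7, 20, 26, 39, 12, 23,
        26, 22, 24, 33, 30]

data_range = max(data) - min(data)   # 49 - 5 = 44
width = 6                            # 44 / 8 intervals = 5.5, rounded to 6

# Boundaries start half a unit below the first interval (4-10 becomes
# 3.5-9.5) so no whole-number measurement falls exactly on a boundary.
lower = 3.5
for i in range(8):
    lo = lower + i * width
    hi = lo + width
    frequency = sum(1 for x in data if lo < x < hi)
    midpoint = int(lo + 0.5 + width / 2)   # e.g., 4 + 3 = 7
    print(f"{midpoint:>2}  {lo}-{hi}  frequency {frequency}")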

Step 10. Prepare the histogram.

Draw the vertical axis on graph paper. Label it. Be sure to give enough room to get all of your data on the chart. It’s best to “rough out” your graph in pencil, then redo it in ink. You will always write the term “frequency” on the vertical axis of this type of histogram (it’s called a frequency histogram for a reason).

Mark the horizontal axis. Be sure you have enough room to put in all intervals. (For the chart, use intervals, not boundaries.) Using a pencil, mark the end of each interval. Label each interval by writing the midpoint of that interval below the horizontal axis in approximately the middle of that interval.

Label the horizontal axis. What measurements are you counting?

Title the histogram.

[Frequency histogram of patient waiting times: frequency (0 to 10) on the vertical axis, minutes on the horizontal axis, with interval midpoints 7 through 49]

When preparing your histogram:

Be sure it is neat and easy to read.

Be sure it tells the story of the process—its purpose is to let you understand how the process is working at a single point in time.

Write a brief narrative of what the graph is saying. Most people can read a graph, but some can’t. This is just insurance that all will be able to read the graph.

Step 11. Once the histogram is complete, draw the specifications.

[The patient waiting times histogram with specifications drawn in: lower specification 20 minutes, target specification 40 minutes, upper specification 60 minutes; the mean (X̄) is marked well below the target]

This example shows that the outputs of the process are not normally distributed (i.e., they don’t fit a bell-shaped curve). In addition, it shows that the process is not producing output consistent with the specifications, and the mean of the distribution is not on the target specification.

There are a number of measurements below the lower specification limit (LSL), and the mean (X̄) is well below the target specification (TS). This process is in trouble. It must be modified so X̄ is closer to the TS; it also will require some reduction in variation so that all the measurements fall between the LSL and USL.

Commonly available computerized spreadsheet software is extremely useful in constructing and displaying histograms.

Understanding Control Charts

Some years ago, a physicist at Bell Telephone Laboratories by the name of Walter Shewhart began to play with the idea that a fairly run-of-the-mill rule used in statistics could have significant implications for quality improvement.1 The rule he was thinking about had to do with probability and was developed by a Russian by the name of Tchebycheff (pronounced Cheby-cheff).2

In fact, this Russian fellow developed three related rules, all based on the relationship between a bell-shaped, or normal, curve and standard deviations. A standard deviation is a unit of variation about the mean, or average. In particular, where:

X - a single data point

X̄ - the mean, or average, of the data

n - the number of data points

Σ - “the sum of”

then, the standard deviation is

σ = √( Σ(X - X̄)² / n )

What this says is to subtract the mean from each data point, square the result, and then add up all these squared results. Once that’s complete, divide the total by the number of data points and take the square root of that number. There you have it: the standard deviation (sometimes referred to as “sigma” after the Greek letter σ used to represent the standard deviation).
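As a check on the arithmetic, here is the same calculation written out in a few lines of Python; the five data values are invented purely for illustration.

import math

data = [8, 10, 12, 9, 11]                      # invented example values
mean = sum(data) / len(data)                   # the mean, X-bar = 10.0
squared_deviations = [(x - mean) ** 2 for x in data]
sigma = math.sqrt(sum(squared_deviations) / len(data))
print(round(sigma, 2))                         # 1.41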

We’re going to be talking about this relationship between a normal curve and its standard deviation by referring to Exhibit IV-3.

Exhibit IV-3

[Normal curve: distribution of Stanford-Binet IQ scores, with mean X̄ = 100 and standard deviation σ = 15. Rule 1 spans one standard deviation on either side of the mean (68% of the data); Rule 2 spans two (95.5%); Rule 3 spans three (99.7%). From L. M. Terman and M. A. Merrill (1937), Measuring Intelligence. Boston: Houghton Mifflin.]

Tchebycheff’s rules:

Rule 1: Roughly 68% of the data will be located within one standard deviation on either side of the mean.

Rule 2: About 95.5% of the data will be located within two standard deviations on either side of the mean.

Rule 3: Approximately 99.7% of the data will be located within three standard deviations on either side of the mean.
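Applied to the IQ scores in Exhibit IV-3, for example, Rule 3 says that roughly 99.7% of scores should fall within 100 ± 3(15), that is, between 55 and 145.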

Shewhart was only interested in Rule 3. Wheeler ingeniously showed why. 3 We all realize that the shape of the distribution of data often does not fit a normal curve; sometimes it isn’t even close. Knowing this, Wheeler looked at how well each of the three rules worked with a variety of distributions. Take a look at what he found.

Rule 1:

Exhibit IV-4

From D. J. Wheeler & D. S. Chambers (1992). Understanding Statistical Process Control (2nd ed.). Knoxville: SPC Press.

The number between the lines above each distribution in the exhibit shows the percentage of data within one standard deviation of either side of the mean (see Exhibit IV-4).

Obviously, Rule 1 is not accurate if the shape of the distribution is not a normal curve.

Let’s look at Rule 2:

Exhibit IV-5

From D. J. Wheeler & D. S. Chambers (1992). Understanding Statistical Process Control (2nd ed.). Knoxville: SPC Press.

Rule 2 works better, but is significantly inaccurate for the first distribution (see Exhibit IV-5).

Now look at Rule 3:

Exhibit IV-6

From D. J. Wheeler & D. S. Chambers (1992). Understanding Statistical Process Control (2nd ed.). Knoxville: SPC Press.

This is the only rule of the three that is relatively accurate regardless of the shape of the distribution of the data. In fact, no matter the shape of the distribution, Rule 3 is by far the most accurate (see Exhibit IV-6). For this reason, Shewhart considered Rule 3 most useful.

Based on Rule 3, what he concluded was that a data point that falls more than three standard deviations from the mean is a very unlikely occurrence regardless of the shape of the distribution. He said that if you have one or more data points this far from the mean, then it’s reasonable to conclude that the voice of the process is telling you that there is assignable cause variation: something is messing up the process and it needs to be fixed. The process is ill.

This characteristic of the relationship between means and standard deviations, Rule 3, is the basis for control charts. Many years of experience in a wide variety of organizations around the world have shown Rule 3 works extraordinarily well. It separates the “noise” of the natural variation of the measurements of the process that you would expect if the process is working as it should from a signal that is telling you something is wrong (i.e., the process needs to be fixed).

The general form of the control chart is depicted in Exhibit IV-7.

Exhibit IV-7

[General form of a control chart: measurements plotted over time, with a solid center line (CL) and dashed upper and lower control limits (UCL and LCL)]

A center line (CL) provides a measure of the central tendency of the data (e.g., the average, or mean). The custom in CQI is for the center line to be a solid line.

Control limits, upper (UCL) and lower (LCL), are calculated from the data and provide a means of distinguishing the types of variation present. It is the custom within CQI for control limits to be dashed lines.

By preparing a control chart, you will be able to see the average of the process, three standard deviations above the average (UCL), and three standard deviations below the average (LCL). You will see Tchebycheff’s Rule 3 at work. You will hear the voice of the process and understand what it is telling you.

Types of control charts

There are several types of control charts. Which chart is best to use will depend on the characteristics of what you are measuring:

• Is your data variable or attribute?

• Is the data based on a single measurement or is it based on a sample of measurements?

• Are you interested in whether a unit of work is defective or not, or are you interested in how many total defects you have in your work?

You get the idea. The answer to the question, “What control chart is best?” is: “It depends.” Fortunately, it is usually fairly easy to determine which chart to use with a particular measurement. At the end of this section, a flowchart is provided that will guide you in your selection of control charts. First, let’s take a look at the different types of control charts so you’ll know what your choices are as you prepare to monitor and improve a process.

X̄ & R Chart

X̄ & R charts are used to track variable measurement data that is gathered in subgroups and averaged. They are referred to as “X-bar and R” charts; they are also called average and range charts (see Exhibit IV-8).

Note that when developing an X̄ & R chart, collecting four or five data readings or more per sample is a good general rule. If possible, use a constant sample size: X̄ & R control charts can be developed with varying subgroup sizes, but the calculations are more complicated.

It is helpful to have at least 25 averages before calculating control limits and interpreting the charts. Therefore, if the subgroup size is 4, then at least 100 measurements are required. If fewer data are available, preliminary control limits may be calculated with subsequent recalculation as more data are collected. The risk you run when calculating control limits with a small amount of data is that the charts won’t be very powerful. They won’t signal the process going out of control very quickly.

Exhibit IV-8

Constructing an X̄ & R Chart

This chart can only be used with variable data.

Step 1. Complete the identifying information at the top of the chart.

Fill in the dates and times measurements are taken.

Step 2. Fill in the measurements as they are taken. Circle the largest, box the smallest. (It helps to use colored ink that is bright.)

Step 3. Add the measurements in each column. Write this in the box labeled SUM.

Divide each SUM by the number of sample measurements. Write this number in the box labeled AVERAGE, X̄.

Subtract the smaller number in each column from the largest number in each column. Write this number in the box labeled RANGE, R.

If something happened when taking measurements, write a number or letter in the box labeled NOTES. Write the same number or letter on a separate sheet of paper along with whatever it is you wish to remember about this set of measurements. For example: “Three employees were absent today. Dramatically slowed processing.”

Step 4. Calculate the overall average (X̿).

Total the sample averages (X̄s) and divide by the number of samples.

Step 5. Calculate the average range (R̄).

Total the sample ranges and divide by the number of samples.

Step 6: Determine the scales for the graphs and plot the X̄s and Rs.

Note there is a graph labeled AVERAGES. You will need to determine the scale to use on this graph. To do this, find the largest and smallest X̄s. Be sure you will have enough room on the graph to include these two values and data points that go somewhat above or below them. Plot the X̄s on this graph.

Now, do the same thing for the RANGES chart. Plot the Rs on this chart.

Step 7: Calculate the control limits for ranges (Rs).

Always start with the R chart. It’s the easiest to calculate and, if you find the process is not in control at this point, there is no reason to go any further (to review the guidelines for determining if a process is in control or not, see page ________).

At the end of this chapter is a table (see Exhibit IV-23). Look up D3 and D4, based on the sample size (n).

To calculate the upper control limit for R (UCLR), multiply D4 by R̄, or

UCLR = D4(R̄)

To calculate the lower control limit for R, multiply D3 by R̄, or

LCLR = D3(R̄)

Draw R̄ on the RANGES chart. Use a solid line.

Draw UCLR and LCLR on the RANGES chart. Use dashed lines.

Step 8. Interpret the RANGES chart (see guidelines on page ___).

If the Rs are in control, go on to the next step. If not, stop and fix the process.

Step 9. Calculate the control limits for averages (X̄s).

Look up A2 in the table at the end of this section, based on the sample size (n).

Multiply A2 by R̄ (A2R̄).

Now add that amount to X̿ to get the upper control limit for the averages, or

UCLX̄ = X̿ + (A2R̄)

To get the lower control limit for the averages (LCLX̄), subtract (A2R̄) from X̿, or

LCLX̄ = X̿ - (A2R̄)

Step 10. Interpret the AVERAGES chart.

Follow the same guidelines as in Step 8.
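Pulling Steps 4 through 9 together, here is a minimal Python sketch, assuming a subgroup size of 5 and the standard table constants for that size (A2 = 0.577, D3 = 0, D4 = 2.114); the six subgroups of measurements are invented for illustration, and a real chart would use at least the 25 subgroups recommended above.

# Invented example: six subgroups of five waiting-time measurements each.
subgroups = [
    [33, 41, 17, 22, 26],
    [29, 12, 44, 16, 30],
    [19, 37, 38, 28, 28],
    [15, 20, 29, 27, 16],
    [37, 22, 32, 27, 36],
    [21, 18, 14, 41, 19],
]

A2, D3, D4 = 0.577, 0.0, 2.114       # table constants for subgroup size n = 5

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages (X-bars)
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges (Rs)

xbarbar = sum(xbars) / len(xbars)    # overall average
rbar = sum(ranges) / len(ranges)     # average range

ucl_r, lcl_r = D4 * rbar, D3 * rbar                       # RANGES chart limits
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # AVERAGES chart limits

print(f"R-bar = {rbar:.1f}, UCL_R = {ucl_r:.1f}, LCL_R = {lcl_r:.1f}")
print(f"Overall average = {xbarbar:.1f}, limits {lcl_x:.1f} to {ucl_x:.1f}")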

This type of control chart is one of the most powerful available. It lets you look at the data from two perspectives—ranges and averages. Think of it as taking into consideration both a patient’s temperature and blood pressure.

Example (from ZONTEC™, an SPC software package):

Note: A bold line is used by ZONTEC to show what data actually was included in the calculation of the control limits. Solid lines are used both for control limits and the CL.

XmR Chart

There are some interesting characteristics of the XmR chart. Typically, it is used with variable data. However, Wheeler argues that this type of chart also can be used with attribute data. In fact, he says that this is the chart to use when it is not possible to meet the assumptions that are required for each of the attribute charts.4


These charts are not as powerful as the X̄ & R charts and, therefore, won’t detect changes in the process as quickly. However, they are more powerful than the attribute control charts (see Exhibit IV-9).

Exhibit IV-9

Constructing an XmR Chart

You will follow the same steps as you would with an X̄ & R chart, with a couple of exceptions:

1. Fill in the measurements on the first line of the SAMPLE MEASUREMENTS portion of the chart.

2. There are no SUMS or AVERAGES.

3. The range is a moving range (mR). With only one value per sample, it’s not possible to calculate the range within each sample. So, you will calculate the range between samples.

If the value of X for the first measurement is 27 and the value of the second is 32, then

32-27= 5

If the value of the third measurement is 41, then

41-32= 9

mR is calculated in this fashion after the second measurement is taken. There is no mR for the first measurement.

4. Calculate X, the average of the Xs (i.e., the data points). Calculate mR, the average of the mRs. Draw X & mR onto the appropriately labeled chart using a solid line.

5. Calculate the control limits for the moving ranges. Because each moving range spans two measurements, use D3 and D4 for a sample size of two:

UCLmR = D4(mR) = 3.267(mR)

LCLmR = D3(mR) = 0

If the mR chart is in control, calculate the control limits for X.

UCLX = X + 2.66(mR)

LCLX = X – 2.66(mR)

6. Recalculate the control limits five times: every 20 measurements until you have 100 measurements. This should be enough measurements to reduce the effect of sampling error (i.e., sometimes you will get weird results just because you happened to get a bizarre sample).
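Here is a minimal sketch of the XmR calculations in Python, assuming the measurements arrive as a simple list. The constants 2.66 and 3.267 are the standard factors for a moving range of two measurements.

```python
# Sketch of XmR control limit calculations (Steps 3-5).
def xmr_limits(xs):
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]   # moving ranges; none for the first point
    xbar = sum(xs) / len(xs)                         # average of the individual values (X)
    mrbar = sum(mrs) / len(mrs)                      # average moving range (mR)
    return {"UCLmR": 3.267 * mrbar, "LCLmR": 0.0,
            "UCLX": xbar + 2.66 * mrbar, "LCLX": xbar - 2.66 * mrbar}

print(xmr_limits([27, 32, 41, 35, 38]))
```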

Example (from ZONTEC™, an SPC software package):

p chart

This chart is based on a percentage: the number of defectives divided by the number of items sampled. That is, if a medical record for a patient included one error, the medical record has one defect and is also one defective medical record. If the medical record for a patient included ten errors, then the record has ten defects but is still only one defective. This kind of control chart is based on the number of defectives, not on the number of defects (see Exhibit IV-10).

Exhibit IV-10

Constructing a p Chart

To calculate p, divide the number of defectives by the sample size. Multiply that amount by 100 (this last step changes the data from fractions to percentages), or

p = (number of defectives / sample size) × 100

Sample size for a p chart can vary. However, it is necessary to re-calculate control limits any time the sample size changes.

Step 1. Complete the identifying information at the top of the chart.

Fill in date and time (if appropriate) that measurements are taken.

Step 2. Fill in sample size in NO. INS (i.e., number inspected) box.

Fill in number of defectives in NO. DE box.

Step 3. Fill in KINDS OF DEFECTS. This is the number of each kind of defect found in each sample. This information is not used in the p calculations but is useful later when trying to figure out how to improve the process.

Step 4. Calculate p. Divide NO. DE by NO. INS for each sample and multiply by 100.

Step 5. Calculate p, the average percent defective for the process.

p = (total number of defectives / total number of units inspected) × 100

Note--this is not the same as calculating the average of the p values.

Step 6. Determine the scales for the graph and plot the p values. Use a solid line to draw the p.

Step 7. Calculate the control limits.

UCLp = p + 3√( p(100 – p) / n )

LCLp = p – 3√( p(100 – p) / n )

In words, subtract p from 100. Multiply that amount by p. Divide this amount by the sample size. Take the square root of all this. Multiply the result by 3. Now you have:

3√( p(100 – p) / n )

Add this amount to p to get UCLp. Subtract this amount from p to get LCLp.

LCLp cannot be less than 0. If the calculation comes out less than 0, then LCLp equals 0. (You can’t have less than 0%.)

Step 8. Interpret the chart (see page ____).
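A minimal sketch of the p chart arithmetic in Python, assuming the defectives and sample sizes are supplied as parallel lists. Because the sample size can vary, the limits are recomputed for every sample, with the lower limit floored at 0:

```python
import math

# Sketch of p chart calculations, with p expressed as a percentage (0-100).
def p_chart(defectives, sample_sizes):
    pbar = 100 * sum(defectives) / sum(sample_sizes)     # average percent defective (p)
    limits = []
    for n in sample_sizes:
        width = 3 * math.sqrt(pbar * (100 - pbar) / n)
        limits.append((max(pbar - width, 0.0), pbar + width))  # (LCLp, UCLp)
    return pbar, limits

pbar, limits = p_chart(defectives=[3, 5, 2], sample_sizes=[100, 120, 90])
print(pbar, limits)
```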

Assumptions that must be met to use a p chart:

1) A sample of distinct outputs is periodically taken from the process.

2) Each output can be clearly classified as defective or not.

3) The measurement tracked and evaluated on the control chart is the percentage of defective items.

4) Whether a particular output is defective or not will not have any effect on whether the next output of the process is defective or not.

5) When the process is in control, the probability of each output being defective is the same.

Use of a p chart does not require equal sample sizes. However, the control limits will change every time the sample size changes. The result will be a control chart that is a little more difficult to interpret and may be rather confusing (see Exhibit IV-11).

Exhibit IV-11

np Chart

The np chart is used to track the number of defectives resulting from a process. When using the np chart, the number of items observed must be constant. This may be possible where the output volume of the process stays about the same, or where sub-groups of constant size are deliberately taken from the process (see Exhibit IV-12).

Exhibit IV-12

Constructing an np Chart

This chart is based on the number of defectives (see p chart for explanation of the difference between defects and defectives).

Every sample size must be the same - no exceptions.

Step 1. Start the same as you would with a p chart. Cross off the % DE since it won’t be used. Note np is the number entered in the NO. DE box.

Step 2. Calculate np, the average number defective in the process.

np = total number of defectives / number of samples

Note--Divide the total of the column labeled NO. DE by the number of samples. Do not divide by the total of the NO. INS row.

Draw np onto the graph using a solid line.

Step 3. Calculate the control limits for np.

UCLnp = np + 3√( np(1 – np/n) )

LCLnp = np – 3√( np(1 – np/n) )

LCLnp cannot be less than 0. If the calculation comes out less than 0, then reset LCLnp to 0 (you can’t have an average number of defectives less than 0).

In words, divide np by the sample size. Subtract that amount from 1. Then, multiply that result by np. Take the square root. Multiply by 3. Now you have:

3√( np(1 – np/n) )

Add this amount to np to get UCLnp. Subtract this amount from np to get LCLnp. Convert LCLnp to 0 if the result of the calculation comes out less than 0.

Step 4. Interpret the chart (see page ___).
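A minimal sketch of the np chart arithmetic in Python, assuming every sample has the same size n (the function name is ours):

```python
import math

# Sketch of np chart calculations; every sample must be the same size n.
def np_chart(defectives, n):
    npbar = sum(defectives) / len(defectives)    # average number defective (np)
    width = 3 * math.sqrt(npbar * (1 - npbar / n))
    return {"npbar": npbar,
            "UCLnp": npbar + width,
            "LCLnp": max(npbar - width, 0.0)}    # LCLnp floored at 0

print(np_chart(defectives=[4, 6, 3, 5], n=100))
```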

Example (from ZONTEC™, an SPC software package):

A requirement that must be met in order to chart counts rather than percents is that the area of opportunity must not change. For example, if three people don’t show up for appointments one day in which there were five appointments, and three don’t show the next day out of fifteen appointments, what would be learned by charting just the number of no-shows? It would be meaningless and would lead you to make poor decisions. It would only make sense to chart the number of no-shows if the number of appointments were constant day-to-day.

Assumptions that must be met to use an np chart:

1) A sample of distinct outputs is periodically taken from the process.

2) Each output can be clearly classified as defective or not.

3) The measurement tracked and evaluated on the control chart is the number of defective outputs.

4) Whether an output is defective or not will not have any effect on whether the next output of the process is defective.

5) The sample size must be consistent.

6) When the process is in control, the probability of each output being defective is the same.

c Chart

The c chart is based on a count of the number of defects produced by a process when the area of opportunity is constant from subgroup to subgroup. The area of opportunity (i.e., roughly, the number of items inspected) usually includes both a time and place (see Exhibit IV-13).

Exhibit IV-13

Constructing a c chart

This chart is based on a count of the number of defects (c)—not the number of defectives. See the discussion on the p chart for the definitions of these terms.

Step 1. Complete the identifying information at the top of the chart.

Fill in date and time (if appropriate) that measurements are taken.

Step 2. Put the number of defects in the NO. DEF. box. Fill in the kinds of defects.

Step 3. Calculate c, the average number of defects for the process.

c = total number of defects / number of samples

Step 4. Determine the scales for the graph and plot the c values. Use a solid line to draw the c.

Step 5. Calculate the control limits.

UCLc = c + 3√c

LCLc = c – 3√c

LCLc cannot be less than 0. If the calculation comes out less than 0, then set LCLc at 0.

In words, take the square root of c. Multiply by 3. Now you have:

3√c

Add this amount to c to get UCLc. Subtract this amount from c to get LCLc, which cannot be less than 0.

Step 6. Interpret the chart (see page_______).

Example (from ZONTEC™, an SPC software package):

Assumptions that must be met to use a c chart:

1) The defects are rare events.

2) Only one type of defect is being measured.

3) The area of opportunity for defects to occur is relatively constant. For example, tracking the amount of turnover on a nursing unit can involve a count of people who leave employment each month if the budgeted number of positions on that unit changes very little over time. As such, you can use a c chart for this situation. If the number of budgeted positions does change significantly over time, a p chart is more appropriate.

u Chart

The u chart tracks the rate of defects occurring in processes. It is used in lieu of the c chart when the area of opportunity cannot be kept the same from subgroup to subgroup. Like the p chart, the control limits for the u chart will have to be calculated for each subgroup. This makes the interpretation of the chart a bit more difficult (see Exhibit IV-14).

Exhibit IV-14

Constructing a u chart

This chart is used when counting the number of defects and the area of opportunity changes.

Step 1: Complete the identifying information at the top of the chart. Fill in the dates and time measurements are taken.

Step 2: Collect the data. Record the number of defects and the sample size.

Step 3: Calculate and record the defect rate (u) for the sample.

u = c / n

Where: c = number of defects in the sample
       n = sample size

Step 4: Calculate the average defect rate (u).

u = Σc / Σn

That is, add up all the defects and divide that total by all the items examined.

Draw u on the control chart as a solid line.

Step 5: Calculate the upper and lower control limits for each “u”. (Remember: control limits change every time n changes.)

UCLu = u + 3√( u / n )

LCLu = u – 3√( u / n )

Draw UCLu and LCLu on the chart.

Step 6: Interpret the chart (see page ______ on how to interpret control charts).
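A minimal sketch of the u chart arithmetic in Python. Because the area of opportunity (n) can change from sample to sample, the limits are recomputed for each sample:

```python
import math

# Sketch of u chart calculations.
def u_chart(defects, sample_sizes):
    ubar = sum(defects) / sum(sample_sizes)      # average defect rate (u)
    limits = []
    for n in sample_sizes:
        width = 3 * math.sqrt(ubar / n)
        limits.append((max(ubar - width, 0.0), ubar + width))  # (LCLu, UCLu)
    return ubar, limits

ubar, limits = u_chart(defects=[3, 7, 2], sample_sizes=[50, 80, 40])
print(ubar, limits)
```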

Assumptions that must be met to use a u chart:

1) The defects are rare events.

2) Defects are independent; that is, the occurrence of one defect has no effect on whether or not another will occur.

Selecting the appropriate control chart

The control chart to use with a particular set of data depends on the kind of data involved. The flowchart depicted in Exhibit IV-15 can be used to select the appropriate control chart.

Interpreting a Control Chart

Western Electric developed some rules to use when interpreting control charts: 5

Rule 1: A process is out of control whenever one or more data points fall outside the (three standard deviation) control limits (see Exhibit IV-16).

(Insert Exhibit IV-16 about here)

Rule 2: A process is out of control whenever at least two out of three consecutive data points fall on the same side of, and are more than two standard deviations away from, the central line (e.g., X) (see Exhibit IV-17).

(Insert Exhibit IV-17 about here)

Rule 3: A process is out of control whenever at least four out of five consecutive data points fall on the same side of, and more than one standard deviation away from, the central line (see Exhibit IV-18).

(Insert Exhibit IV-18 about here)

Rule 4: A process is out of control whenever at least eight consecutive data points fall on the same side of the central line (see Exhibit IV-19).

(Insert Exhibit IV-19 about here)

Other rules that have been used include:

TREND RULE: A process is out of control if eight or more consecutive data points trend up or down. Two points of equal value do not break a trend. However, it’s important to watch for trends even when there aren’t eight points in a row. Consider Exhibit IV-20.

(Insert Exhibit IV-20 about here)

PATTERN RULE: A process is out of control any time there is a discernible pattern in the data (e.g., the data conform to a cycle) (see Exhibit IV-21).

(Insert Exhibit IV-21 about here)

HUGGING RULE: A process is out of control any time 15 or more consecutive data points are within one standard deviation of the central line. This only applies to X & R and XmR charts (see Exhibit IV-22).

(Insert Exhibit IV-22 about here)

JCAHO has stated that it will use the Western Electric rules. Some healthcare organizations may use all of the rules listed above. This will provide somewhat tighter reins on quality – you’re more likely to spot a problem with the predictability of a process than you would if you used only the Western Electric rules.
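Rules like these are easy to automate. The sketch below implements two of them (Rules 1 and 4) in Python, assuming the central line and the one-standard-deviation width have already been estimated from the control chart; the remaining rules follow the same windowing pattern.

```python
# Sketch of two Western Electric checks.
def rule_1(points, cl, sigma):
    """Rule 1: any point beyond the three-sigma control limits."""
    return any(abs(p - cl) > 3 * sigma for p in points)

def rule_4(points, cl, run=8):
    """Rule 4: eight consecutive points on the same side of the central line."""
    for i in range(len(points) - run + 1):
        window = points[i:i + run]
        if all(p > cl for p in window) or all(p < cl for p in window):
            return True
    return False

pts = [5.1, 5.3, 5.2, 5.4, 5.2, 5.3, 5.5, 5.2, 5.4]
print(rule_1(pts, cl=5.0, sigma=0.2))   # False: no point beyond 5.0 +/- 0.6
print(rule_4(pts, cl=5.0))              # True: all points above the central line
```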

Chapter 4

1. Look at the following histograms. What do they tell you about the process?

2. Why did Shewhart consider Rules 1 and 2 less useful than Rule 3?


3. Using the following data, develop an X and R chart.

Sample          1    2    3    4    5    6    7    8    9   10

Measurement 1   7    8    6    8   10    7    8   10    5    4
Measurement 2  10    8    5    9    5    4    7    9    3    9
Measurement 3   6    3    6    6   10    8    5    4    7    5
Measurement 4   3    4    6    3    4    7    9    3    4    8
Measurement 5   5    5    7    9    8    9    3    5    8    7

Is it in control? How do you know? What rules are violated, if any?

4. Develop an XmR chart for the following data:

38, 32, 39, 33, 37, 33, 38, 36, 40, 39, 35, 32, 33, 34, 37

Is it in control? How do you know? What rules are violated, if any?

(Insert Exhibit IV-23 about here)

References:

1. Shewhart, W. (1980). Economic Control of Quality of Manufactured Product. Milwaukee: American Society for Quality Control.

2. Wheeler, D. (1995) Advanced Topics in Statistical Process Control. (pp. 115-120) Knoxville: SPC Press.

3. Wheeler, D. & Chambers, D. (1992). Understanding Statistical Process Control. (pp. 62-64). Knoxville: SPC Press.

4. Wheeler, D. & Chambers, D. (1992). Understanding Statistical Process Control. (pp. 257-260). Knoxville: SPC Press.

5. Western Electric (1985). Statistical Quality Control Handbook. Indianapolis: AT&T.

Chapter 5 Delivering What the Customer Wants

Capability

Control charts will tell you very effectively if a process is stable and, thus, predictable. What they won’t tell you is if the services being provided by the process are any good. It’s entirely possible that the process is consistently providing lousy service – predictable, but lousy.

At this point, you have removed assignable cause variation and the process is in control, but you can’t say yet if the services provided meet customer expectations or not. To find out the latter, you need to do a capability study. To do so, you must first ensure that the process is in control. An unpredictable process is, by definition, not capable of consistently producing desirable results--it’s unpredictable!

Once it’s in control, only then can you proceed to determining the degree to which the process is capable. The approach you take depends on the type of data. First, let’s look at capability studies with variable data.

Variable Data

Develop a histogram. Draw lines representing expected performance (we’ll refer to these lines as upper (USL) and lower specification limits (LSL), or specs). At a glance, you’ll know if you’ve got problems (see Exhibit V-1).

(Insert Exhibit V-1 about here)

Next, calculate Cpk, the process capability. This simple statistic will tell you how close the mean of the distribution is to the nearest specification limit (see Exhibit V-2). It is the ratio of customer expectations to the normal variation of the process.

(Insert Exhibit V-2 about here)

This will tell you at a glance whether the process is capable or not. If Cpk is 1.0 or greater, it’s capable. If it is less than 1.0, the process is not capable of consistently producing results that meet expectations (or “are within specs,” however you like to say it).

What isn’t so easy is that Cpk can vary over time: what you get today for Cpk for some processes may not be what you get tomorrow. Fortunately, we have a tool available to determine if the Cpk is changing so much over time that you should begin to believe there might be a problem--use a control chart.

Just drop your Cpks into an XmR chart and watch the magic. Once there are 40 or more values on this chart, you can interpret the Cpk data with a significant degree of certainty. You will know the amount of uncertainty that is inherent in your Cpks for the process. (Just for clarification, this is not the same thing as evaluating the variation in the process.) This information will help keep you from becoming unnecessarily distressed by changes in your Cpk. It will also tell you when it’s appropriate to be distressed.

There is one other feature of Cpk you need to be aware of that will help with interpretation in some situations. Suppose the LSL is a certain value (e.g., 0) and you wish to obtain measurements from the process that are as close to that value as possible. The nature of Cpk is such that the closer you come to obtaining all measures close to the LSL, the smaller Cpk will become. When a situation such as this occurs, you must use a different formula to calculate Cpk. The formula to use is as follows:

Cpk = (USL – X) / 3σ

That is, for a “one-tailed test,” subtract X from USL and divide the result by three times the standard deviation (σ). This formula will correct for the fact that you are actually working to move the process mean as close as possible to one of the specifications. Since this is a common situation in healthcare, it will be a formula you will use very often.
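A minimal sketch of both Cpk calculations in Python. For simplicity it estimates sigma with the ordinary sample standard deviation; in practice, sigma for a capability study is usually estimated from the control chart (e.g., from the average range).

```python
import statistics

# Two-sided Cpk: distance from the mean to the *nearest* spec limit, over 3 sigma.
def cpk(data, usl, lsl):
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sigma)

# One-sided Cpk, for when you are deliberately driving the mean toward one limit.
def cpk_one_sided(data, usl):
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    return (usl - mean) / (3 * sigma)

data = [6.8, 7.1, 7.0, 6.9, 7.2, 7.0, 6.8, 7.1]
print(cpk(data, usl=10, lsl=4), cpk_one_sided(data, usl=10))
```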

Attribute Data

Capability ratios, such as Cpk, are based on the likelihood of an individual output meeting the specifications. An output either is in specification (spec) or it isn’t. Counting, which provides the basis for attribute data, necessarily involves more than one output. For example, if the specs indicate that there can be no more than 3 defectives per 100 outputs, it becomes clear that this does not involve an individual output--it involves 100. As such, calculating Cpk on attribute data makes no sense.

To assess the capability of attribute data, you only need to know the specifications and the appropriate control chart. If the control limits are within the specifications, the process is capable. If it’s helpful to do so, you can use the average of the control chart (e.g., p) to represent the capability of a process measured with attribute data.

In his book, Beyond Capability Confusion, 1 Wheeler does an excellent job explaining why Cpk doesn’t make sense for attribute data as well as other issues related to Cpk. Please see that source for further explanation.

For our purpose, it will be sufficient to note that specs for attribute data need to be very clear. If, for example, the spec is, “the average percentage will be 5% or less,” that is quite different than, “no single percentage point can be greater than 5%.” Confusion will lead to unnecessary problems.

In short, once the specs are clearly defined for attribute data, the process is operating within specs or it isn’t. No calculations are necessary.

How Does This Differ From What We Used To Do?

In the past (and many still do it this way), we would submit a measurement once every three to six months to the Quality Management department. The single data point was either within specifications or it wasn’t. Anything within specifications was good enough.

The approach Shewhart advocated is that consistently being within specifications isn’t good enough. The ultimate goal now is continuous quality improvement (CQI) until all measurements fall exactly on the target spec. Comparison of the two approaches to quality management has produced irrefutable results. When done properly, CQI always out-performs the old approach. (As an interesting aside, for-profit companies using CQI out-perform, on the stock market, companies that do not use CQI.)2

It’s easy to say CQI is a better approach, but a more important thing to understand is why. Again, let’s refer to Wheeler. 3 He argues that every process is in one of four states: the ideal state, the threshold state, the brink of chaos, or the state of chaos. Furthermore, processes are not static, they change from state to state.

Processes in the ideal state are in control and are capable. CQI for processes in this state involves moving the average of the data closer to the target specification and reducing variation. Perfection (which should not be expected but you may be able to get amazingly close) is when all measurements fall exactly on the target specification.

Processes that are in control but are producing some results outside of specifications are in a threshold state. In the “old” approach, management would likely introduce 100% inspection and/or try to find out who is responsible. Within CQI, management knows the only way to move processes from the threshold state to the ideal state is by changing the process.

Processes that are not stable but are producing results within specifications 100% of the time are on the brink of chaos. Management operating within the “old” approach will think everything is fine – it isn’t. Assignable causes will continue to change the process until results fall outside of specs. A process out of control eventually will produce unacceptable results, and it’s not possible to guess when this will happen. The only way out of this state is to eliminate the assignable causes.

When the process is out of control and not capable, it is in a state of chaos. Nothing is predictable. No matter what the manager tries, nothing works for long because the process is always changing. You will need to determine the assignable causes and eliminate them, then alter the process to make it capable.

In the “old” approach, chaos managers were assigned to bring the process out of the state of chaos. Once that was done, the manager went to work on another process in chaos (ever hear a manager say, “All I do is put out fires”?). When the manager moves on, the process begins to deteriorate again.

What complicates all this is that processes don’t remain in one state. There is a tendency for processes to move toward the state of chaos. Wear and tear, employee turnover, equipment breakdown, failures, etc., all combine to weaken the performance of a process until it slides into the state of chaos.

The only way to consistently produce high quality is to use the methods advocated by Shewhart: control charts and capability studies. Moreover, this is the only way to achieve optimal performance and to stay there.

Chapter 5



1. Using the X and R chart you developed in Chapter 4, calculate Cpk.

USL = 10

LSL = 4

Is the process capable? If so, how can you tell? If not, why not?

2. Develop a histogram of the data used in #1. Does it show the process to be capable?

3. Why develop a histogram when assessing capability? Doesn’t Cpk tell you all you need to know?

4. Describe how to tell if a process is capable if the measurement provides attribute data.

References:

1. Wheeler, D. (1999). Beyond Capability Confusion. (pp. 64-65). Knoxville: SPC Press.

2. Betting to Win on the Baldie Winners. (1993, October 18). Business Week, p. 8.

3. Wheeler, D. (1999). Classics From Donald J. Wheeler. (pp. 85-88). Knoxville: SPC Press.

Section 2:

Process Improvement Tools:

Making the Process Work Better

Once you have a process defined and measured, and the measurements placed in an appropriate control chart, you are ready to find out what is going on in the process. That is, you can figure out why the process is behaving as it is. This is referred to as root cause analysis.

Sometimes it is possible for one or two people to determine what is causing the process to not perform as desired. When this happens, then most definitely the process should be changed accordingly. If the changes work as desired, great. However, there are times when it is not possible for one or two people to figure out why the process isn’t working. When this happens, it is best to bring together a team of subject matter experts (SMEs). Usually, SMEs are the people who actually do the work. To make such teams work, it’s best to have a team leader—the person ultimately responsible to ensure that the process improves as it should. In addition, a team will work best if there is a facilitator—the person responsible for using CQI tools so that the team is most likely to make the needed changes. The team leader is responsible for improving the process; the facilitator is responsible for coordinating the efforts of the team members so they can improve the process—but the facilitator is not responsible for actually improving the process.

Once a root cause analysis has been completed—the second step in the Kirk model—the next step is to find a solution that is most likely to result in process improvement as desired. As with the prior step, this can be done by individuals or by a team.

The last step in the Kirk model is to implement the solution that is most likely to lead to the desired process improvement. Once implemented, the measurements of the process need to be monitored very carefully to see if the change occurs as desired. If it does, then celebrate! If not, then go back to the prior steps in the Kirk model to see where things went wrong.

If at any time during the process improvement effort the decision is made to use a team, there are some tools that will be needed to ensure the team is successful. Teams can be very complex and, at times, frustrating. However, if you understand the dynamics of a group of people working together as a team and can use a few CQI tools that were designed to enhance team performance, you typically will find the team will successfully improve the process.

Chapter 6 Root Cause Analysis

Now that you have heard the voice of the process, you know whether or not there is a problem. If there is a problem – the process is not in control and/or is not capable – it needs to be fixed so that the process will begin to operate in the ideal state. Alternatively, you may find the process is already in the ideal state but there is opportunity for further improvement. The process may, for example, be one of the processes targeted in the organization’s strategic quality plan. In that case, the process definitely should be improved.

Regardless of the reason for improving a process, the first step in this improvement effort will be to find out what is keeping the process from operating at its optimum. This step in process improvement is referred to as root cause analysis.

The tools available to conduct a root cause analysis range from simple (e.g., fishbone) to moderately complex (e.g., experimentation). To begin, the facilitator and team leader work together to select the best tools to use to find the root cause of the problem in the least amount of time. Of course, the facilitator may need to vary from this plan as he or she believes necessary.

As a result of this step in the process improvement effort, the team of subject matter experts (SMEs) assigned to improve the process will be fairly certain that they know the reason the process is not working as it could.

Now, let’s look at the tools available for root cause analysis.

Idea Generation

Brainstorming

Brainstorming is a technique that can be very useful in conducting a root cause analysis. It is simply a structured approach to get ideas as to why the process is not working as it could.1 It works best when there is a need for a lot of ideas about what might be the root cause and the team is not likely to have significant conflict. In these situations, this simple tool can be very effective (see Exhibit VI-1).

To conduct a brainstorm with SMEs:

1. Write the problem onto a flipchart. Make sure everyone understands the problem.

2. Each person takes a turn and expresses one idea of what might be causing the problem. If a person cannot think of an idea, he/she says “pass.” It is important to keep the process moving and not permit the participants to make evaluative remarks, raise eyebrows, or otherwise suggest they do or do not support any idea given. Such evaluation will damage the brainstorming process.

3. The person facilitating the meeting writes ideas on a flipchart as they are given by team members. Be sure to write down all ideas and don’t alter the wording without asking permission of the person who came up with the idea.

4. Once it appears that most team members have run out of ideas, the facilitator can then throw it open: anyone can give an idea as soon as he/she comes up with it; there is no need to wait his/her turn.

5. Often, it is worth allowing a few days for further thought. Then, bring the group back together to brainstorm further. This “time away” from the problem can lead to increased creativity.

6. Ask if anyone has any questions about any of the ideas. Clarify. Ask if any of the ideas can be combined because they essentially say the same thing. Note that the people who gave the ideas have the final say on whether to combine ideas or not.

(Insert Exhibit VI-1 about here)

Fishbone - Cause and Effect Diagram

A fishbone diagram is a graphic display of the probable causes of assignable cause variation in a process (see Exhibit VI-2). This tool can be used to organize the results of a brainstorming session.

1. Start by writing down the problem to the far right of a flipchart. Draw a box around the problem and draw a straight line from the box to the left side of the flipchart. Be sure everyone understands the problem and how fishboning works.

2. Draw diagonal lines from the line previously drawn. Each line represents a category of sources of variation. The most frequently used categories are “environment, equipment, method, supplies, and people.” However, the actual categories used are left to the judgment of the team.

3. Brainstorm possible sources of variation. The facilitator notes the ideas in the appropriate category. Of course, the team determines what category is appropriate for each idea.

(Insert Exhibit VI-2 about here)

Nominal Group Technique (NGT)

Sometimes the participants don’t know each other, the idea generation effort has or is likely to have significant conflict, or there is need to be very thorough in generating high quality ideas. When any of these conditions are present, the tool to use is nominal group technique (NGT). This tool is the workhorse of this category of tools.

1. Start by writing the problem on a flipchart. Clarify the problem for the participants.

2. Ask the participants to remain quiet for several minutes and to use the time to write down as many ideas as they can as to possible causes of the problem.

3. After everyone is done, go around the table and ask each participant to give one idea. Write down each idea on a flipchart. As with brainstorming, there can be no evaluation.

4. Ask if anyone has any questions about any of the ideas. Clarify. Ask if any of the ideas can be combined because they essentially say the same thing. Note that the people who gave the ideas have the final say on whether to combine ideas or not.

Wallstorming

This is a variation of the prior techniques. It is useful when there is little time for a meeting and/or participants may need a lot of time to think about the problem.

1. Write the problem on a flipchart. Make sure everyone understands it.

2. Ask the participants to write down any ideas they have on the flipchart as they get time. Let them know how long the flipchart will be available; it can be left up for anyone to note ideas, possibly for several days.

3. Meet to clarify and combine ideas, as appropriate.

The Relationship Diagram

There are processes in which cause-and-effect relationships are so complex that a simple tool like fishboning is not helpful. One tool that can be used in these situations is a relationship diagram (see Exhibit VI-3).

1. Write the desired outcome of the process in the center of a sheet of paper. Draw a double circle around it.

2. In the space surrounding the outcome, write the actions that must happen in order for the outcome to be attained. (Note: in order for a statement to be an action, it must contain a verb). Draw a single circle around each action.

3. Consider the relationship between all the actions. If one action causes another, draw an arrow from the cause to the effect.

(Insert Exhibit VI-3 about here)

The Tree Diagram

Sometimes graphically displaying the logic of the process will provide the greatest understanding. When this happens, the tool to use is a tree diagram (see Exhibit VI-4).

1. Write the outcome of the process to the left on a sheet of paper. Draw a box around it.

2. Write actions taken and decisions made in the order that they occur starting from the left of the paper. Draw lines to connect the actions and decisions. The lines represent “logic branches” in the tree.

(Insert Exhibit VI-4 about here)

Selecting a probable root cause

At this point, you likely have several ideas as to what the root cause might be. In fact, you might have a very large number of ideas. You need to select the most likely root cause.

There are several tools available to do this.

Affinity Diagram

On occasion, it may be easier to select the most probable root cause of the problem if the potential root causes are clustered using some logic. A truly extraordinary tool for accomplishing this is the affinity diagram. The team provides the logic for the clusters of ideas, places each idea in its appropriate cluster, and does so with very little discussion. When used in conjunction with NGT, it can be very effective at focusing a high-conflict group on the problem at hand.

1. Once the list of ideas has been generated and clarified, participants write each idea onto a sticky note. Be sure to caution the participants to write legibly.

2. Tape several blank pieces of flipchart paper on the walls in a fairly large room. Leave plenty of space between sheets (it’s good to have 1-2 feet between sheets).

3. Ask the participants to place all their sticky notes on one or more of the sheets of flipchart paper. It doesn’t matter what sheet they put the sticky notes on at this point.

4. Ask the participants to take the sticky notes and move them from one sheet to another until all the ideas on a given sheet somehow “fit together.” This is to be done in silence.

5. On a single sheet of flipchart paper, write the words “Parking lot.” Any idea on which the group cannot agree is to be placed on the parking lot sheet for discussion once the group has completed the remainder of the exercise.

6. Ask someone to confer with the other members of the group and come up with a label for each cluster of ideas.

7. Conduct a discussion with the group about the appropriateness of the clusters. Then discuss with them what to do with the parking lot issues.

This tool helps the group members structure their thinking as they prepare to select the probable root cause.

Multivoting

This tool helps the team prioritize the ideas of potential root causes so that the root cause that the group thinks is most likely to be the true root cause is first, the second most likely is second, etc. This rank ordering is helpful when deciding what root cause to fix first and, if that doesn’t work, what to do second, etc.

1. Once the ideas have been generated, clarified and combined by the team, number all of the ideas. The numbers are only used to identify each idea and can be helpful when discussing the ideas.

2. Ask the group to discuss the ideas to get an understanding of what the others are thinking is the most probable root cause. Lobbying for one or more ideas is acceptable as long as no one becomes domineering.

3. Give each team member the same number of colored sticky dots. The number of dots should vary with the number of ideas. If there is a large number of ideas, the facilitator may wish to give out as many as 15 dots. If the number of ideas is small (e.g., 20), then give out, maybe, five dots.

4. If the ideas have been written on flipchart paper that has been taped to the wall, give the team members the opportunity to walk around and look at the ideas.

5. After returning to their seats, the team members vote on the ideas they think are the most likely root causes. They do this by writing the number used to identify the idea on a sticky dot. What makes multivoting different from other forms of voting is that each member can vote more than once for an idea, usually up to some maximum number of votes per idea. For example, the person facilitating the meeting may indicate that he/she has decided to distribute seven dots to each participant. He/she then might say that there is a maximum of four votes for any one idea.

6. After everyone has decided how they want to vote, the participants put the dots representing their votes by the idea they want to vote for.

7. Add up all the votes. Discuss the top 3-5 ideas. Reach agreement on how to proceed.

Variations on Multivoting

If time is important and there is little likelihood the participants will change their votes when they see how others vote, there are a couple of shortcuts.

One is to simply have the participants vote by using a marker to place one or more hash marks by the ideas for which they are voting. This can speed things up some.

Alternatively, the participants can get at most two votes per idea. Then the person facilitating the meeting can have them raise two, one, or no hands as he/she reads through the ideas. Count the votes.

Consensus voting

Once in a while, getting acceptance by everyone is critical. When this is the situation, the tool to use to select the root cause is consensus voting. After multivoting to reduce the number of ideas to be considered to a few, maybe 4-5, then turn to consensus voting.

1. Give everyone three note cards—one green, one yellow and one red.

2. As the facilitator reads through the ideas, have everyone hold up one card. If a participant really likes the idea, he/she should hold up the green card. If it’s OK but nothing special, he/she should hold up a yellow card. If he/she can’t accept the idea, he/she should hold up a red card.

3. If the facilitator notes that there are one or more red cards, then the effort should stop and the team should discuss the idea and negotiate changes to be made to the idea so that it is, at least, OK for everyone.

Data Analysis

At this point, the number of potential root causes should be narrowed to a manageable few. If it is still not clear what the most likely root cause is, or if solving the problem is critical, very time consuming, and/or expensive, then you may wish to do an objective assessment to further increase the likelihood of having found the true root cause before moving on to the next step. If you recall, the next step is finding a solution to the problem. If you haven’t determined the true root cause when you start the next step, then the next step and all the others that follow will only serve to show that you were not successful in finding the true root cause in this step. As such, a little extra caution is in order at this point.

There are several tools you can use to more objectively assess the accuracy of your root cause analysis.

Pareto Chart

A pareto chart is a special form of bar chart. It’s based on the notion that 80% of the trouble comes from 20% of the problems. This 80/20 rule may or may not be true, but it gives a starting point for analysis.

Pareto charts can be used in a variety of different ways. One is to look at frequency of occurrence by potential root cause. For example, one could count the number of patient falls that happen shortly after the patient has showered, the number of falls that happen when patients are getting out of bed, etc.

It is simply a modified frequency histogram. The tallest bar is to the left, the next tallest is second, and so forth. It will look something like Exhibit VI-5.

(Insert Exhibit VI-5 about here)

Generally, a good rule of thumb is this: if the tallest bar includes 60% or more of the data, then you can be pretty comfortable that you have the root cause. If not, you have a “flat pareto” and have not found the root cause.

That doesn’t mean you have to stop here. You may use a different measure. You might want to look at severity of injury by root cause. You might want to try a new perspective. Try number of falls by day of the week (see Exhibit VI-6). Maybe that will tell you something.

(Insert Exhibit VI-6 about here)

Even if you do find what appears to be a root cause, you might wish to look further: that is, break the root cause into its component parts. If you find the patients who are most likely to fall are elderly, it may be worth looking at the number of falls by type of medication the patient is taking at the time of the fall. Such assessments can break apart a preliminary cause and find the true root cause.

It’s worth noting that some will use a cumulative percentage chart as part of a pareto. This can be used to see what percent each bar on the chart contributes to the total.

(Insert Exhibit VI-7 about here)
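The arithmetic behind a pareto chart is simple enough to sketch in a few lines of Python. The fall categories and counts below are made up for illustration; note that the tallest bar accounts for exactly 60% of the total, right at the rule-of-thumb threshold mentioned above.

```python
# Sketch of pareto chart arithmetic: sort categories by frequency and
# compute each bar's share and cumulative percentage of the total.
falls = {"after shower": 42, "getting out of bed": 17, "in hallway": 6, "other": 5}

total = sum(falls.values())
cumulative = 0
for cause, count in sorted(falls.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:20s} {count:3d}  {100 * count / total:5.1f}%  "
          f"(cumulative {100 * cumulative / total:5.1f}%)")
```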

Scatter Plots

There are times you may want to see if there is a relationship between two variables. You might, for example, want to see if there is a relationship between the number of patient complaints and staffing in the Nursing Department. To begin, you will plot the data (see Exhibit VI-8).

(Insert Exhibit VI-8 about here)

The pattern suggests that the more nurses there are at work, the more patient complaints there are.

(Insert Exhibit VI-9 about here)

This pattern indicates just the opposite: the number of complaints goes up as staffing goes down (see Exhibit VI-9).

It’s not possible to conclude that one thing causes another from a scatter plot, but if there is no relationship you can conclude one thing does not cause the other. Like pareto charts, if a relationship looks like it exists, this tool can increase your confidence that you have found the true root cause.
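One quick, objective way to check a scatter plot impression is to compute the Pearson correlation coefficient: values near +1 or –1 suggest a strong linear relationship, values near 0 suggest none. A minimal sketch in Python (the staffing and complaint figures are made up for illustration):

```python
import math

# Sketch of the Pearson correlation coefficient (r).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

staffing = [12, 10, 14, 9, 11, 8, 13]        # nurses on duty
complaints = [4, 7, 2, 9, 5, 11, 3]          # patient complaints
print(pearson_r(staffing, complaints))       # strongly negative: complaints rise as staffing falls
```

Remember, even a strong correlation does not prove that one thing causes the other.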

Experiments

By intentionally varying the level or amount of whatever you think is the root cause and monitoring changes in the process measurement, you can be most confident that you have found the root cause. There are a variety of experimental designs that you can use.

If you wish to do anything more than a basic analysis, check with a statistician. These studies can become complex.

Chapter 6

1. Describe a process problem that you are familiar with. Brainstorm possible causes. Now set it aside for several hours. Return to brainstorm more possible causes.

2. What does the following Pareto chart tell you?

[Pareto chart: Needlesticks. Frequency (0-50) by time of day; bars ordered 10am, 7am, 11am, 8am, 9am, 12am, 6am]

[Pareto chart: Needlesticks. Frequency (0-50) by age of pediatric patient; bars ordered 12, 7, 8, 6, 3, other ages]

3. Do you believe there is a relationship between time required for nurses to respond to call lights and patient satisfaction at Ajax General Hospital?

The scatter plot of the measurements is as follows:

[Scatter plot: Patient Satisfaction vs. time required for nurses to respond to call lights]

What can you conclude from this scatter plot about the relationship?

References:

1. Osborn, A. (1963). Applied Imagination. New York: Scribner’s.

Chapter 7 Developing Potential Solutions

At this point, you have carefully described the process, identified that a problem exists, and have found the root cause of the problem. Now it is time to come up with one or more solutions that will eliminate the root cause and move the process closer to the ideal state of being in control and capable.

You can certainly use any of the idea generation tools introduced in the last section. Just modify the tools so they are focused on finding solutions to the problem. That, along with any of the voting tools, may be sufficient. However, there are several more tools available to you to find the best solutions.

Criteria Grid

By using predetermined criteria as to what constitutes a good solution, the group can significantly reduce the number of potential solutions to consider.

1. Use an idea generating tool to come up with a list of criteria the group might use to select the best potential solution. Use one of the voting tools to reduce the list to somewhere around 4-6 criteria.

2. Ask the group to set the appropriate scoring (i.e., yes/no, high/med/low, or 1-5) for each criterion. A voting tool can be used, if needed.

3. Draw the criteria grid (see Exhibit VII-1).

(Insert Exhibit VII-1 about here)

4. Ask the group to evaluate the potential solutions on each criterion. Enter the results in the appropriate square.

5. Select the potential solution with the highest overall evaluation. This can be determined by circling the highest rating in each column. Then total the number of circles in each row. The potential solution with the highest number of circles has the highest overall evaluation.

6. Discuss it with the group to ensure credibility (see Exhibit VII-2).

(Insert Exhibit VII-2 about here)

Cost-Benefit Analysis

Obviously, the goal of this step of process improvement is to come up with the most appropriate solution for the least cost. The criteria grid will help with the first part, a cost-benefit analysis will help with the second.

1. Begin by noting the proposed solution and the period of time included in the analysis.

2. List all tangible costs and the dollar value of each. Do the same for tangible benefits.

3. List intangible costs and benefits.

4. Divide the total dollar value of costs by the total dollar value of benefits. The result is the cost-benefit ratio. If the ratio is more than 1.0, then the costs outweigh the benefits in the time period assessed. If the ratio is less than 1.0, benefits exceed costs.

5. Compare the results of this cost-benefit analysis with the results of other cost-benefit analyses. The solution with the smallest ratio is the most financially attractive. However, intangible costs and benefits also should be considered. A form that can be used for cost-benefit analysis, along with an example, appears in Exhibit VII-3.

(Insert Exhibit VII-3 about here)
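The ratio in Steps 4 and 5 is simple to compute. A minimal sketch in Python, with hypothetical dollar figures (the intangibles still have to be weighed separately):

```python
# Sketch of a cost-benefit ratio: total costs divided by total benefits.
def cost_benefit_ratio(costs, benefits):
    return sum(costs.values()) / sum(benefits.values())

costs = {"new software": 20000, "training time": 8000}
benefits = {"reduced re-work": 25000, "fewer overtime hours": 15000}

ratio = cost_benefit_ratio(costs, benefits)
print(ratio)   # 0.7: less than 1.0, so benefits exceed costs in this period
```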

Contingency Plan

Another approach to developing potential solutions that can be helpful is contingency planning. By having participants list all the ways that the plan or goal for improving the process potentially won’t work, they provide a foundation for reducing the likelihood of failure (see Exhibit VII-4). Contingency planning can be helpful in preparing for the next tool: action planning.

1. Review the improvement goal and/or plan with the group. Clarify.

2. Brainstorm ways the goal and/or plan can fail or get worse.

3. Brainstorm ways to minimize each potential cause of failure.

4. Use a selection technique to choose the best way to overcome each potential cause of failure.

5. Review the results with the group after a plan has been developed for each potential cause of failure. Modify the plan as needed. Put it into an action plan format.

(Insert Exhibit VII-4 about here)

Action Planning

Once the potential solution has been identified, the next thing the group may decide to do is to develop a plan for implementing the potential solution. The action planning tool can be very helpful for providing necessary detail (see Exhibit VII-5).

1. Define expected results, how the results will be accomplished, and what resources will be needed.

2. Define how the results will be measured. (Usually, this will simply involve watching changes in the control charts.)

3. Once the first and second steps have been completed, go back and establish completion dates and assign responsibilities for successful and timely completion of each action.

(Insert Exhibit VII-5 about here)

Pilot Study

Once a potential solution has been selected, it may be advisable to try it out on a small scale; that is, conduct a pilot study. This is especially true if there is a high cost associated with implementing a solution or with failure of the solution.

Watch the impact of the “small scale” implementation on the process measurement. Does it change in the direction you need? Is the size of the change large enough to make you comfortable that you have found a solution that will be effective at producing the change you need? If so, then implement the potential solution throughout the organization. If not, then you may need to go back and select another potential solution using the information you gained by conducting the pilot study. Alternatively, you may need to look again at the root cause analysis. Given the results of the pilot study, are you still confident you’ve found the root cause?

Chapter 7

1. What criteria would you use to select a solution to the process problem you identified in Chapter 6 (#1)?

a. Develop a criteria grid.

b. Brainstorm a few potential solutions to the problem.

c. Use the criteria grid to select the most likely solutions to resolve the problem.

2. Using the solution you just selected, develop a contingency plan for its implementation. Now develop an action plan.

Chapter 8 Implement and Follow-up

Once you have reached a point that you are confident that you have the solution(s) that will result in process improvement, then implement the solution. Then, watch what happens.

The control chart should tell if the solution is working. The measurement should begin to show movement in the direction you want. Over time, the measurement should change to where you want it to be. If it doesn’t change or doesn’t change as much as you need it to, then the process needs further improvement.

It is important to understand that the control chart likely will show that the process is going out of control if you are effectively changing the process as you need. That is not only acceptable, it is desirable.

Once the process has stabilized again, you may wish to re-calculate the control limits. There are no hard and fast rules about when to re-calculate control limits. Wheeler provides some general guidelines.1 He suggests re-calculating the control limits if all of the following conditions are present:

1. The data are significantly different than in the past.

2. The reason for the change is known.

3. The change in the data is desirable.

4. The change in the process that resulted in the data changing is expected to continue.

It is important that you only re-calculate control limits when you have enough data. If you do re-calculate with small amounts of data, the control limits should be regarded as temporary. Once you have 50-100 measurements, the control limits aren’t likely to change much with the addition of more data unless the process changes significantly.

Once the process has been improved, there’s always a question of why you would want to go through the effort of continuing to monitor a process that is in control and capable. After all, it is fixed isn’t it? There is no need to monitor a process that is in control and capable, provided you never experience equipment failure, hire new employees, only purchase supplies that are totally consistent, etc. If you can’t be certain that nothing will adversely impact the process, then you will need to check the process periodically.

Remember, the natural state for a process is to be out of control and not capable – the state of chaos. It is only through the use of CQI tools that you can get the process in control and capable – the ideal state. Moreover, it is through the continued use of these tools that you can ensure the process remains in the ideal state once you get it there. How often you check the process depends on how important the process is. If it is not especially important, then don’t check it often. If it is very important, check it frequently.

Defining Success

You now know when you should re-calculate control limits after you have implemented your solutions. You also are aware that you have to monitor the process once it is in the ideal state if you are going to keep it that way. However, once you’ve done all this—you’re not done yet. You need to improve the process even more. That’s the nature of continuous quality improvement. But, doesn’t this suggest that you will be working to improve the process for eternity? You’re a busy person. You’ve got other things to do. When is good enough, good enough?

Fortunately, there is an answer to when you’ve done enough. In the old quality assurance (QA) approach to quality, good enough was defined as when the measurement was within the upper and lower specifications or standards. This has been referred to as “goal post” quality. The reason for this becomes clear when you look at a graphic depiction of QA:

[Depiction of “goal post” quality: any measurement falling between the LSL and the USL is acceptable]

Taguchi examined the cost of this approach. He found that outputs with measurements closer to the specification limits (i.e., the USL and LSL) don’t truly work as well as those with measurements that are closer to the target specification (i.e., the most desirable measurement—the one that will yield the most desirable result). As a result, those items produced that are closer to the specification limits are the most expensive of those produced. You can see the goal post approach to quality—the QA model—is expensive because it potentially produces so much “near junk.” A depiction of the Taguchi loss function makes this more obvious (see Exhibit VIII-1).

(Insert exhibit VIII-1 about here)

The area under the curve represents cost. It’s obvious that the least cost incurred for an output results when the measurement of that output matches the target specification. The greatest cost for an output occurs when the measurement approaches the specification limits (LSL or USL).

You can see from the chart that Taguchi demonstrated that it’s not enough for outputs to be within specifications if your desire is to minimize cost. He showed that the minimum cost for a process occurs when all outputs are exactly equal to the target specification. This would suggest the ultimate goal for process improvement is to get all processes to the point that all measurements are equal to the target specification.
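Taguchi’s loss function is usually written L(y) = k(y – T)², where T is the target specification and k is a cost constant. A minimal sketch in Python (the value of k and the measurements here are arbitrary):

```python
# Sketch of the Taguchi loss function: loss grows with the square of the
# distance from the target, so "barely within specs" is still costly.
def taguchi_loss(y, target, k=1.0):
    return k * (y - target) ** 2

for y in [7.0, 8.0, 9.0, 10.0]:   # target = 7.0; loss rises as y drifts toward a spec limit
    print(y, taguchi_loss(y, target=7.0))
```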

Actually, the answer to the question of when you should consider stopping your efforts to improve a process is more straightforward: stop improving a process when any savings you will realize are less than what you’d realize by improving other key processes. Of course, as you improve one process to the ideal state, you will have more time available because you won’t be using the time for the process that you previously had used to repair process problems. CQI is a way to give yourself more time.

There is another way to determine when you should stop your performance improvement efforts on a process—look at the strategic plan for quality. To get the most from CQI, it is important to have a strategic plan for quality. Determine what key processes need to be improved each year, provide the resources for improving those processes, plan for continued monitoring of the improved processes, and plan for what happens next. Include performance goals in your plan for a process that, once attained, will indicate the improvement effort for that process has been successful and efforts should switch to monitoring to keep the process in the ideal state.

Chapter 8

1. Discuss the advance in quality thinking that Taguchi introduced. How did it change the way we look at quality?

2. Describe a process that you have seen deteriorate into chaos.

References:

1. Wheeler, D. (1999). Classics From Donald J. Wheeler. (pp. 49-51). Knoxville: SPC Press.

Chapter 9 Team Dynamics

Many times, the person responsible for the process will be able to make the needed changes to the process by changing job assignments, altering the manner in which work within the process is done, or holding discussions with other subject matter experts. Anytime a process can be altered in this fashion, it is likely a good idea to do so. However, there are other times when the person responsible for the process lacks the knowledge, ability, or authority necessary to determine what changes to the process need to be made and to make them. Moreover, there are times when it is necessary for the subject matter experts (SMEs) to buy-in to the process changes needed so they will be actively supportive as the changes occur. In situations such as these, there may be a need to pull together a team of SMEs.

Team Leader

There are two roles that will determine the effectiveness of a team. The first role critical to the effectiveness of the process improvement effort is the team leader. The responsibilities of the team leader are as follows:

1. Be sure s/he understands the problem.

2. Obtain any necessary authorizations to create a team to address the problem, select SMEs to be on the team, and get authorization from the SMEs’ supervisors for the SMEs to participate on the team.

3. Find a trained facilitator to assist with the process and meet with the facilitator to clarify how the two will work together to address the problem.

4. Arrange and schedule team meetings. Keep all records, data, and otherwise document the team’s progress.

5. Participate as a team member. Attend all meetings, carry out assignments between meetings, etc. Since the purpose of calling a team together is to get the input from other SMEs, the team leader needs to encourage participation of the other SMEs.

6. Report to Quality Management (QM) and the Performance Improvement Steering Committee (PISC) as needed.

7. Implement the changes suggested by the team. Monitor the effect that the changes have on the process. Keep the team members informed. If the changes are successful, be sure the process continues to be monitored in order to “maintain the gain.” If not successful, call the team together and continue to work until the process is improved as needed.

8. Someone needs to be assigned by the team leader to record the major points of the team’s discussion. Ideally, this will be someone who is not part of the team. However, if need be, it can be a team member. If team members act as recorders, usually the team leader will rotate the assignment of recorder from meeting to meeting. It is difficult to accurately record the discussion while also making a meaningful contribution to it; rotating the assignment of recorder keeps any one SME from being removed from the discussion at every meeting.

Facilitator

The other key role on a team is that of the facilitator. If it can be said that the team leader owns the problem, then the facilitator owns the manner in which the team works to address the problem. The duties of a facilitator are as follows:

1. Meet with the team leader to discuss and gain an understanding of the process, the problem that has been identified and to develop a plan to be used to conduct the team meetings.

2. Get all supplies needed to facilitate the meetings such as a flipchart, markers, etc.

3. Facilitate the meetings of the team so that the team understands the problem, selects the probable root cause(s), develops potential solutions, effectively implements the process changes, monitors the results, etc.

4. Throughout, the facilitator must maintain neutrality about which solution is chosen and the way it is implemented. As such, it is important that the facilitator not be a SME.

5. Meet with the team leader between meetings of the SMEs to evaluate the meeting process, plan for improvements, and address problems.

Both the team leader and the facilitator need to be knowledgeable about the tools of quality improvement. Both should be actively involved in teaching the CQI tools to the SMEs at appropriate times during the meetings.

At each meeting, it may be desirable to ask someone to be the timekeeper primarily to remind the facilitator of any time limits the team may set. Often, the team leader will keep time.

Conducting Team Meetings

Getting Started

Once the team leader has selected the team members and has done the necessary introductory work with the facilitator, it’s time to begin the meetings of the team. Initially, it is important to make certain all the team members know each other. This can be handled in a variety of ways.

The team members can simply introduce themselves and tell what their role is in the process under study. Alternatively, the team leader and/or facilitator can introduce the team members. Other times, introductions can be used along with an “ice breaker,” which is a simple team process used to reduce the initial tension some team members may experience at the first meeting (see Exhibit IX-1).

(Insert Exhibit IX-1 about here)

The facilitator works with the group to clarify the objectives for the group, review everyone’s role, provide an overview of the plan that has been worked out by the team leader and facilitator, and go over any ground rules the team will follow.

In an effort to speed up a team’s work, a “default” set of ground rules that each team will follow can be used, such as:

• Attend every meeting, unless it is simply not possible.

• Accept assignments for work to be done between meetings and complete the assignments on time.

• Arrive on time for meetings.

• Assume responsibility to ensure the team works effectively to improve the process.

• Be courteous.

Effective Meetings

The relationships and communications of teams are very complex. In addition to the work to be done by the group, there are interpersonal undercurrents. Sometimes these interpersonal undercurrents become so forceful that they disturb the effectiveness of the team. As such, it is important the facilitator carefully manage these dynamics.

Peter Scholtes has identified four stages that teams seem to progress through:1

1. Forming

• The team members may be somewhat excited about being selected for the team, concerned about what they are getting into, or relieved that the problem is finally being addressed.

• They start a transition from individuals to members of a team.

• Little, if anything, is accomplished during this stage because there is so much going on that distracts the team members from the team’s objectives.

2. Storming

• At this point, the team begins to realize that the task may be difficult, it might involve a significant time commitment and/or some hard work and they don’t clearly see how all the work will get done.

• Members can become irritable, recalcitrant, and argumentative and may seem to want off the team. Others may become overly zealous and want to jump to developing solutions.

• This stage is the most difficult for most teams.

3. Norming

• Team members begin to understand how the work will get done and what their roles will be.

• They begin to focus their energies on the process improvement objectives.

• Relationships become more friendly and cohesive.

• Discussions begin to reflect common goals, not the goals of individuals.

4. Performing

• By now, the team members have settled into their roles and have developed a common understanding of what the group is to achieve.

• Team members begin to become satisfied with the process and develop an attachment to the team.

• Work is now being done to address the objectives of the team.

The duration and intensity of each stage varies from team to team. It may take two or three meetings to reach the performing stage, or it may take months. It is important to understand that working through these stages is normal. Moreover, it is important to understand that the team eventually will reach the fourth stage and productive work will begin. Don’t become overly concerned; the initial dysfunction of a team is normal.

Along with the stages a team will pass through, the team’s mood will vary from meeting to meeting. As progress is made, the team’s outlook will brighten and the mood will be positive. As work seems to stall, members may seem bored. If they discover an error in their work, find a potential solution didn’t work as they expected, etc., the team often will become frustrated.

It is important to understand that the stages and moods of a team are normal components of the team process. Stages are relatively predictable; moods aren’t as predictable. It is important that the facilitator and team leader not become so involved in these team dynamics that process improvement gets derailed. As Scholtes says, it’s best to accept these characteristics of a group with an attitude of “this, too, shall pass.” Meaningful process improvement will get done, but sometimes, depending on the group dynamics, it may not be obvious for a while.

Characteristics for Success

There are some things a team can do to increase its likelihood of success:2

1. Get the right members on the team. Success is less likely if the SMEs selected for the team don’t have the knowledge of the process necessary to effectively improve it. Cover all the bases; don’t leave any aspect of the process unknown to the team.

2. Be sure everyone understands why the team was formed. Misunderstandings early on will seriously damage the work of the team. You do not want to hear, after several meetings, a team member say, “Oh, I didn’t understand that is what we were to work on. I thought we were supposed to….”

3. The team members all need to accept responsibility for improvement of the process and to recognize the necessity of everyone’s involvement. They need to communicate clearly, engage in behavior that advances the work of the team, encourage participation by others and otherwise work to accomplish balanced participation, and accept their role in the group.

Team members can be very helpful and enhance the effort of the group if they do such things as:

1. Ask others for their opinions.

2. Ask questions to clarify what others have said and put at least as much effort into understanding others as they put into getting the team to understand themselves.

3. Periodically summarize the discussion to that point.

4. Keep the discussion focused. Avoid digression.

5. Be fair when giving praise or correcting others.

6. Help ease tensions when the group is going through difficult times.

7. Frequently refer to data; don’t just give opinions. Look for evidence to support an opinion.

In general, everyone on the team needs to understand that the process improvement effort is going to take place in an area that is complex, dynamic, challenging at times, rewarding at other times, but ultimately the work will get done. They should neither over-react nor under-react to the behavior of others. They should accept every problem as a group problem. And, they should prevent problems whenever possible.

Common Problems

There are several problems that Scholtes identified as fairly common when working with teams.3

1. Floundering

The team may wonder what to do next from time to time. Early on, this is typical. It is a common part of the group process. Later on, it may suggest the team doesn’t have a clear plan or that some team members don’t truly support the team’s decision and are making it difficult for the group to move forward.

To address floundering, the facilitator may need to review the plan the team developed to improve the process. S/he may ask the group if there is any unfinished business or if someone sees a potential problem that needs to be addressed.

2. Overbearing and domineering team members

Some team members try to control the decisions of the group. They may try to convince the group that they know more about the process than anyone else. They also may spend a disproportionate amount of time talking, thereby dominating the discussion.

Those most likely to become overbearing are people in positions of authority in the organization. They are accustomed to giving directions to others and are not familiar with working as part of a team.

It’s best to prevent this problem by talking with the individuals before the next meeting about the need for getting everyone’s input in a decision, since no one has a complete understanding of the problem (if they did, there would be no reason to have a team). Teams are based on the notion that “all of us are smarter than one of us.” Be sure there is acceptance of the idea that titles are not relevant in an effective team and that if titles do become relevant, the team will be damaged.

If the facilitator sees a member becoming overbearing, he/she may direct the discussions to others on the team. The facilitator may wish to review relevant data to reduce the perception that opinions are facts. In a team, everyone is entitled to an opinion but no one’s opinion is necessarily any better than anyone else’s. After all, opinions are only informed guesses. Data trumps everything else.

3. Rush to accomplishment

From time to time, you’ll encounter members who just want to get it done. They may see themselves as “take charge types” who get things done. They don’t recognize how wrong the adage of “don’t just sit there, do something” is. In this context, rushing to solutions is high risk. It most likely will result in the team coming up with solutions for a problem that doesn’t exist, assuming the team holds together until they generate solutions. In short, the probability of getting the problem solved using this approach is only slightly better than dumb luck.

This is something that can be prevented by informing the team early on that no effort will be exerted to generate solutions until there is agreement as to the root cause. Make that part of the plan. It may be necessary to remind the group of this by noting solutions that are proposed prematurely on a sheet labeled “parking lot” and reminding the group that they will be discussed after the root cause analysis is complete.

4. Personalizing problems

It’s common for team members who have a disagreement to see it as being caused by a personality characteristic of the other person. Sometimes it is, but not usually. This perception can lead the members down a totally unproductive path. They begin to address problems that have nothing to do with the root cause of the process problem. In fact, if this results in hostility or mistrust, it will impair problem resolution.

The facilitator can redirect the team members to address the process problem by asking appropriate questions. “I understand that you are saying that the process has a problem with… Can you tell me more about that?” William Ury has called this reframing and has provided an excellent discussion of this technique.4

5. Straying

It is uncommon for a discussion to stay on a single topic for more than a few minutes. If you have the opportunity to listen to a group of people visiting, you will notice how many changes of topic occur in a relatively short time span. The facilitator needs to be ready for this and be prepared to bring the team discussion repeatedly back to the objectives of the team.

Pointing to the most recent items noted on the flipchart and asking a relevant question works quite well. Summarizing what has been said to that point, what Ury calls “rewinding the tape,” also works well. Asking the team, “What should we discuss next?”, is a gentle reminder that the group has strayed.

6. Feuds

It is rare, but it does happen, that a team will include members that were in serious conflict before they were asked to be on the team. This should be avoided whenever possible. The feud can easily spread to other members and can overwhelm the team so that there is little chance that the objectives will be met.

The team leader needs to be careful when selecting SMEs to avoid feuds. This is by far the best way to deal with feuds: keep them away from the team. However, if the team does end up with feuding members, the team leader and facilitator may need to visit individually with the feuding parties and develop a plan to make it possible for the people to work together effectively until the objectives are met. If this doesn’t work, the team leader and facilitator may ask the feuding parties to participate “off-line”: the facilitator and team leader will meet with each party individually between team meetings to get their input. Do not kick anyone off the team. That only creates a person who is likely to oppose the work of the team. At most, take someone off-line.

There are many different interpersonal dynamics that can impact the effectiveness of a team. An experienced facilitator will read Scholtes’s book many times for guidance on how best to manage these dynamics.

Chapter 9

1. Do you always need to involve a team when improving a process? Why?

2. Why is there a need for both a facilitator and a team leader? Is it always necessary to have both when working with a team?

3. Describe a meeting that you are aware of that worked very well. Why did it work well? Now describe one that did not work well. Why did it not work well? How did the meetings differ?

4. Discuss examples of common problems of groups. Have you seen any of these problems? How were they resolved?

References:

1. Scholtes, P. (1988). The Team Handbook. (pp.6-4 to 6-8). Madison: Joiner.

2. Scholtes, P. (1988). The Team Handbook. (pp. 6-36 to 6-45). Madison: Joiner.

3. Scholtes, P. (1988). The Team Handbook. (pp.6-36 to 6-46). Madison: Joiner.

4. Ury, W. (1993). Getting Past No. (pp.76-104). New York: Bantam.

Section 3

Related Tools: Further Understanding and Refining the Process

There are some very powerful tools that, when used in conjunction with SPC and performance improvement tools, significantly enhance CQI. Chapter 10 discusses finding and eliminating wasted effort. Once the process is in control and capable, it makes sense to:

• Eliminate all steps that don’t add value

• Reduce complexity that increases the likelihood of error

• Re-design the process so it is less stressful for those who work within the process.

The result will be a process that is consistently producing results that the customer wants and that involves minimum effort and time.

Once a process is in control and capable, it also makes sense to determine when the process is likely to fail, or go into failure mode. The process can then be modified to reduce the likelihood of failure and to minimize the adverse effects of failure when it does occur. Chapter 11 covers Failure Mode and Effects Analysis (FMEA), a tool to use to minimize process failure.

In the CQI world, there are several models that are useful to guide the CQI effort. It is helpful to understand the models that are frequently used. It is a little unnerving to be involved in a discussion about CQI and find others are using different models. The translation is fairly simple if you are aware of what model is being discussed. As such, Chapter 12 provides a brief overview of some of the most commonly used models.

Chapter 13 provides an overview of the cost of quality. An old notion of quality held that there came a point at which further quality improvements added more cost than they saved. Taguchi’s work on loss functions (see Chapter 8) demonstrated the inaccuracy of this notion. He demonstrated that the least cost occurs when the output of a process exactly matches the target specifications (i.e., optimal performance). Therefore, the best way to minimize cost is to maximize quality.

Chapter 14 addresses a set of tools that typically are only mentioned in passing in most CQI texts: the tools associated with planning and implementing a process. It would be a terrible shame if all the work done to measure and improve a process was done accurately and in a timely manner, only to have implementation of the solution fouled up so that the improvement is never attained. These tools are designed to reduce the likelihood of this happening.

Chapter 10 Finding and Eliminating Wasted Effort

There are things at work that we all do that don’t make much sense, that seem to be a waste of time, or that no one can explain. It’s interesting to hear the responses of people when you ask them to describe five things they do at work that seem a little silly. The most common response: “Do I have to stop at five?”

In many, if not most, organizations, there is a significant amount of work that can best be described as waste. Many of these same organizations are plagued by employee fatigue, problems with quality, higher costs than logically would be expected, or safety problems such as higher rates of employee injury than logically would be expected.

There is a way to address these organizational deficiencies. Again, it takes place at the level of the process. It is important that the process be in the ideal state - both in control and capable - before undertaking the effort to reduce waste. It makes little sense to spend time trying to eliminate wasted effort in a process that is out of control. The results from a process that is out of control are unpredictable. Clearly, any time available should be spent eliminating whatever is producing the assignable cause variation so that the process is producing a predictable result. If this doesn’t happen first, then a substantial amount of effort will be wasted trying to hold the process together, fixing mistakes, dealing with customer complaints, etc. It makes the most sense to eliminate the wasted effort created as a consequence of assignable causes before trying to eliminate waste within the process itself.

It also does not make sense to spend time trying to eliminate wasted effort within a process if the process is producing results that the customer does not want; that is, if it is not capable. Modify the process so that it is consistently producing what the customer wants. Then, reduce waste.1

Also, before you begin the effort to reduce waste, you need to determine the focus of the analysis. Are you going to focus on the tasks performed by those involved in the process? Or are you going to focus on the process product? The first is referred to as a process task analysis; the second is known as a process product analysis. These analyses can produce dramatically different results. As such, it may make sense to do both, but not in a single analysis. Consider an analysis of what nurses do when a patient enters the ICU. Now consider what the process would look like from the patient’s point of view (in this scenario, the process “product” is the patient’s care). Two very different perspectives. The first is a task analysis; the second is a product analysis.

Types of steps in a process

Once the process is operating in the ideal state, you can reasonably turn your attention to finding ways to eliminate waste within the process. This work, which we’ll refer to as a process analysis, begins with a detailed listing of all the steps in the process arranged in the order in which they occur. If you have completed a detailed flow chart, most, if not all, of this step is already done. As in flow charting, it is important that this step be complete and accurate: all that follows depends on this step being done right.

Next, you will need to identify what types of steps are involved. There are six basic types of steps:

1. Operation

2. Transportation

3. Inspection

4. Delay

5. Storage

6. Rework

Operation

An operation adds value. It is a step for which the customer would be willing to pay something. For example, taking a chest x-ray would be an operation step for a patient suspected of lung cancer.

Transportation

This type of step involves moving something or someone from one place to another. For example, a patient may be transported by gurney from their hospital room to pre-op.

Inspection

This involves examining and/or authorizing something. Someone examines the work to see if it’s done right and/or authorizes work to proceed. A sponge count after surgery is an inspection; so is signing a form to approve purchase of new furniture for a waiting area.

Delay

Sometimes a process stops. If it is unscheduled, it is a delay. For example, a patient comes to the hospital for a 2:30 p.m. appointment in respiratory therapy. However, many of the respiratory staff are called to a Code in ER following a car wreck involving several people. The patient with the 2:30 appointment will be delayed until the staff members return to the department.

Storage

Sometimes an item will be placed in storage until some future time. It is a scheduled delay. For example, linen may be placed in a nurse server until needed by a patient. The linen is in storage.

There are two important differences between delays and storage. First, for things, delays are unscheduled while storage is scheduled. Second, delays happen to people or things; storage happens to things, not people. That is, if people are involved, it is always a delay – people are not placed in storage even if the delay is scheduled.

Rework

Occasionally, something happens that makes it necessary to repeat work. Possibly a mistake was made and you have to fix or re-do the work. This is rework. It is work that would not have had to be done if the process had worked right the first time. For example, having to re-take an x-ray because the patient moved is rework.

The steps and the symbols conventionally used to denote each are as follows:

Operation - a circle

Transportation - an arrow

Inspection - a square

Delay - a large “D”

Storage - a triangle

Rework - a circled “R” (®)

It is critical to understand that of the six basic steps of a process only one adds value: the operation step. The other five types of steps are wasted effort. Therefore, by listing all the steps and identifying the types of steps in the process, you will know where to focus your efforts to eliminate waste.

A Process Analysis Worksheet can be very useful when conducting a process analysis:

(Insert Exhibit X-1 about here)

Sometimes identifying the kind of step can get a little difficult. For help, talk to others who know the process. In actuality, the only critical distinction is between operation steps and the others since all the others are merely variations of the same thing, waste. Ask, “Would the customer be willing to pay for this step? Does the step truly accomplish something for the customer?” If the answer is “no,” then it’s waste. It may be helpful to know the kind of waste when re-designing the process, but it’s usually not critical.

Complete the first two columns of the Process Analysis Worksheet: list the steps in the order they occur; then, in the column labeled “Flow,” draw the symbol beside each step that best describes the type of step.

Goals of process analysis

Before you go any further, you need to decide why you are doing a process analysis. How you proceed from this point will be affected by your purpose. Your options:

1. Improve quality

2. Reduce wasted effort

3. Reduce cost

4. Make work easier

5. Improve safety

Let’s look at each of these.

Sometimes assignable cause variation is created by complexity in a process, thereby reducing quality. It is important to always be on the lookout for complexity. The more steps there are in a process, the greater the total time involved in the process in most cases. The probability of error, and therefore of re-work, also goes up. Consider a process that has 40 steps, each with a 4% probability of error (assuming that an error in one step does not affect the probability of an error in any other step). The probability of no errors for the entire process is (.96)^40, or about 20%. If the process is simplified so the number of steps is reduced to 10, the probability of no errors is (.96)^10, or about 66%. Complexity increases cost.
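
A quick calculation makes the point concrete. The short Python sketch below reproduces the arithmetic above; the step counts and the 4% error rate are taken from the example in the text.

```python
# Probability that a process completes error-free, assuming each step
# fails independently with the same probability (as in the example above).
def prob_no_errors(steps: int, p_error_per_step: float) -> float:
    return (1 - p_error_per_step) ** steps

print(f"40 steps: {prob_no_errors(40, 0.04):.0%}")  # about 20%
print(f"10 steps: {prob_no_errors(10, 0.04):.0%}")  # about 66%
```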

The primary focus of process analysis is to eliminate wasted effort wherever possible. Where it’s not possible to eliminate waste, you should at least minimize it. Separating the steps of a process into operations (i.e., value-added effort) and various forms of waste sets the stage for the reduction of wasted effort.

Reducing cost is straightforward. All other things being equal, if less time is spent doing work within a process, the process will cost less. Moreover, if the amount of rework is reduced, the process will cost less.

Many people feel pushed at work: there is so much work to be done in the time available! By reducing the amount of wasted effort involved in work, the overall amount of effort required to do the same work goes down.

Lastly, there are processes that include unsafe steps. By breaking the process into steps and analyzing the contribution of each step, those steps or combinations of steps that create unsafe conditions often become apparent. The challenge then becomes changing the steps so they are safe.

Measurement

A major reason you need to decide what it is that you are trying to accomplish by doing a process analysis is that decision will determine what you measure. Do you measure the cost of each step, time of each step, number of injuries per step, etc.? The choice is determined by what it is you are trying to accomplish.

Measurement is a critical part of process analysis. The measurement for each step is recorded on the process analysis worksheet on the same line as the step in the column labeled “Min.” If you are measuring something other than minutes per step, simply cross out “Min” and write in whatever it is that you are measuring. Then, record your measurements.

Analyze the data

There are several things you can do to analyze the data. A good first step is to finish the Process Analysis Worksheet by placing a dot, for each step, in the column that represents its flow symbol. Next, connect the dots (see Exhibit X-2).

(Insert Exhibit X-2 about here)

Look for patterns. For example, see if there are several consecutive steps in which no operations are performed. If so, look carefully at how this can be changed so that most of these waste steps can be eliminated.

You also can look at the process flow. It’s sometimes easy to do so by copying the flow from the Process Analysis Worksheet and examining it on its own. For example, you might have a process flow that looks something like this:

(Insert process flow example about here)

As we’ll discuss later, analysis of process flow can lead to significant reductions of waste. A third tool that can be used to analyze the data is a Data Summary Chart (see Exhibit X-3).

(Insert Exhibit X-3 about here)

This handy tool can tell you at a glance how well your process is functioning.

If you divide the amount of time spent performing operation steps by the total time spent in the process, you have an idea of the process efficiency:

Operation time

———————————— x 100% = Efficiency

Operation time + Waste time

Alternatively, dividing the time spent performing waste steps by total time reflects the inefficiency of the process.

Waste time

———————————— x 100% = Inefficiency

Operation time + Waste time

Of course, the goal is to maximize efficiency and minimize inefficiency.
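
If you want to automate these calculations, the bookkeeping behind the worksheet and the efficiency formulas is simple to script. Below is a minimal Python sketch; the step names, types, and times are invented for illustration and are not from any example in the text.

```python
# Minimal sketch of a Process Analysis Worksheet with invented entries.
# Only "operation" steps add value; every other step type is waste.
worksheet = [
    # (step description, step type, minutes)
    ("Receive requisition",        "delay",          0.50),
    ("Take chest x-ray",           "operation",      3.00),
    ("Carry film to reading room", "transportation", 1.25),
    ("Wait for radiologist",       "delay",          4.00),
    ("Read film",                  "operation",      2.00),
    ("Re-take blurred film",       "rework",         0.40),
]

operation_time = sum(t for _, kind, t in worksheet if kind == "operation")
waste_time = sum(t for _, kind, t in worksheet if kind != "operation")
total_time = operation_time + waste_time

print(f"Efficiency:   {operation_time / total_time:.0%}")  # operation share
print(f"Inefficiency: {waste_time / total_time:.0%}")      # waste share
```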

A fourth analysis tool that is available is a Pareto chart. For example, you can chart the amount of time spent on each type of process step. The most desirable Pareto chart will show that the vast majority of time is spent in operation steps. The less the Pareto chart reflects this ideal, the more opportunity there is for improvement (see Exhibit X-4).

(Insert Exhibit X-4 about here)

An important consideration when looking at the data from a process is the total time it takes to perform the process. One of the goals of process analysis is to minimize total time. The approaches to reducing total time will be discussed in the next two sections.

Improve the process

There are several things you can do to improve a process. Of course, what you will actually do depends on the reason you are conducting a process analysis. Consider some of the options that are available to improve processes:

1. Eliminate steps

2. Minimize time spent on steps

3. Reduce process complexity and simplify

4. Combine steps

5. Change the sequence of steps

6. Use technology to do some of the work

7. Redesign the process

Wherever possible, eliminate waste steps. When it’s not possible to eliminate steps, then minimize the time spent on these steps. Eliminate or minimize: the two quickest ways to reduce waste in a process.

Look carefully at the process flow; maybe draw a workflow diagram. When used in process analysis, workflow diagrams are changed slightly; the number of each step is included on the workflow (see Exhibit X-5).

(Insert Exhibit X-5 about here)

If several people are involved in a process, maybe it makes sense to do more than one step at a time. That is, have one person do one step, another do a different step.

Look at the order in which steps are being performed. Would changing the order reduce time, safety risks, costs, etc.? If so, change the order.

Can technology alter the process in a desirable fashion? Instead of having small items delivered to various parts of the hospital by a person, could they be sent by a pneumatic tube system? In fact, that is exactly what Sir Gordon Friesen advocated when he altered the way hospitals were designed many years ago.2 Now, medications and supplies are being delivered to the nursing units by robots in some hospitals. Undoubtedly, there are many ways yet to be developed that will permit technology to improve processes. Be creative!

Redesigning the process may have substantial payoff. Most processes were never designed; they evolved through a series of tweaks and twists. Thoughtfully examining the steps of a process often will reveal its weaknesses.

Redesigning the Process

We’re going to devote a section to process redesign since it is more involved than the other approaches to reducing waste in a process. If you recall, the flow of the various steps in a process can be taken from the Process Analysis Worksheet and studied on its own. An example of a process flow could be:

(Insert linear process flow diagram about here)

This is a linear process in that no step begins before the prior step is complete.

Another kind of process flow could be:

(Insert parallel process flow diagram about here)

In this process, two things happen simultaneously. Neither is dependent on the progress made in the other process flow. These are parallel processes.

A third type of process flow may look like the following:

(Insert divergent process flow diagram about here)

This process breaks into parallel processes after starting out as a linear process. This is a divergent process.

A fourth type of process may be:

(Insert convergent process flow diagram about here)

In this type of process, parallel processes come together at some point to form a single linear process. This is a convergent process.

That is, there are four basic types of processes:

• Linear

• Parallel

• Divergent

• Convergent

Divergent processes frequently will be seen with a decision point:

(Insert decision point diagram about here)

Decision points are represented by a diamond.

This indicates that a decision made at a particular point causes the process to diverge. Decision points can be especially useful when designing a contingent process flow that will be initiated when something goes wrong. If everything goes OK, then follow one path. If not, follow the other. This reduces complexity in the design of the process since contingent process flows don’t have to be built into the main process. Work can diverge into a parallel process when something goes wrong.
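
To make the four flow types concrete, a process flow can be represented as a simple map from each step to its successors: a step with more than one successor marks a divergence (a decision point, in the contingent case), and a step with more than one predecessor marks a convergence. The sketch below uses invented step names and is not a notation the text prescribes.

```python
# Hypothetical process flow as a step -> successors map.
# "Verify order" is a decision point: the flow diverges into a normal
# path and a contingent path, which converge again at "File record".
flow = {
    "Receive order":       ["Verify order"],
    "Verify order":        ["Fill order", "Resolve discrepancy"],
    "Fill order":          ["File record"],
    "Resolve discrepancy": ["File record"],
    "File record":         [],
}

divergences = [step for step, nxt in flow.items() if len(nxt) > 1]

predecessors: dict[str, list[str]] = {}
for step, nxt in flow.items():
    for successor in nxt:
        predecessors.setdefault(successor, []).append(step)
convergences = [step for step, prev in predecessors.items() if len(prev) > 1]

print("Diverges at: ", divergences)   # ['Verify order']
print("Converges at:", convergences)  # ['File record']
```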

Note that in developing a process flow, the level of detail is up to the designer. It is possible to become highly detailed or not. The appropriate level of detail is a matter of judgment.

Evaluate the improvement

Carefully review the information you have to this point. By now, you should be very familiar with the process, know how much waste is involved, be aware of complexity and “weirdnesses,” etc.

Now, start trying out alternative process designs. Is there a better way? There almost always is. Be sure to involve SMEs in this step. In fact, ask for ideas from many SMEs, if possible.

In conjunction with a team of SMEs, develop the alternative process design that is most likely to meet the goals of the process analysis. Implement the change.

You may wish to try out the change first by conducting a pilot study, gradually phasing in the change, or just switching completely to the new process. If the change is large and/or expensive, or if failures in the process have serious and/or expensive consequences, you may wish to start with a pilot.

The first thing to look at after changing the process is the control chart. Is the process out of control in a direction that is undesirable? If so, this is an indication that the process change is problematic. Of course, if the process goes out of control in a desirable direction or it remains in control, then celebrate. The redesigned process has passed the first test!

Now, look at capability. In order for the new and improved process to pass the second test, it must continue to be capable.

Third, develop a Before-and-After Chart, a variation of the Data Summary Chart (see Exhibit X-6).

(Insert Exhibit X-6 about here)

This chart shows a summary of the process before and after it was redesigned. It should reflect reduced waste and, in most cases, reduced cycle time.

Lastly, talk to the SMEs, talk to the vendors, and most definitely, talk to the customers. See if they recognize an improvement in the process, or at least not a worsening.

Evaluate all your results. See what you’ve got.

Example

Patient Accounting in any hospital is an extraordinarily busy place. There is a multitude of bills to be prepared, involving an enormous amount of information. The people in the Patient Accounting Department asked for some relief from the workload they were experiencing. Working with a Process Engineer, they decided to reduce wasted time in the process of sending summary bills to patients.

They decided to conduct a process task analysis. That is, they wanted to see what is actually done in this process. They determined that the first step of the process was when the cashier received the summary bills from Information Systems, and the last step was when the summary bills were mailed. Their focus was to reduce the amount of time spent in the process, thereby freeing up some time to do other work and reducing the pressure they felt.

They began by completing a Process Analysis Worksheet (see Exhibit X-7).

(Insert Exhibit X-7 about here)

They found the cycle time per bill was seemingly very short. However, when multiplied by the number of summary bills produced, the process took up a great deal of time.

They next produced a Data Summary Chart (see Exhibit X-8).

(Insert Exhibit X-8 about here)

They found that the cycle time was 1.89 minutes per bill and, of that time, 1.40 minutes were waste. That is, 74% of the time spent on this process was waste. Obviously, this process is a good candidate for waste reduction.

They conducted several team meetings and developed what they believed to be the process that would most effectively address their needs. They developed a Process Analysis Worksheet of this process (see Exhibit X-9).

(Insert Exhibit X-9 about here)

They also developed a Before-and-After Chart (see Exhibit X-10).

(Insert Exhibit X-10 about here)

Although the department has not yet attempted implementation of all the changes to their process, it appears that this redesigned process can save 1.32 minutes per cycle, or 70% of the time spent in the process. This should lighten the load of the people in Patient Accounting.
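
The percentages quoted in this example follow directly from the worksheet figures. A quick check, using the numbers from the text:

```python
# Checking the Patient Accounting figures quoted above.
cycle_time = 1.89  # minutes per bill before the redesign
waste_time = 1.40  # minutes of waste per bill before the redesign
saved_time = 1.32  # minutes per bill the redesign appears to save

print(f"Waste share of cycle: {waste_time / cycle_time:.0%}")  # about 74%
print(f"Projected time saved: {saved_time / cycle_time:.0%}")  # about 70%
```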

If this process had previously been monitored by a control chart, once the redesigned process was implemented, the manager in this area would want to monitor the control chart very carefully to be sure the process remained in control and capable.

Conclusion

Process analysis involves the application of some very simple tools along with a little creativity and discipline to make some very big improvements. Moreover, it is effective. It is not at all uncommon to see substantial reductions in wasted effort.

When coupled with the tools of process improvement, it is very likely that you will see reductions in waste with no reduction in quality. These are the tools of good management.

Chapter 10

1. Using the workflow log you prepared in Chapter 2 (#1), develop a Process Analysis Worksheet.

2. Prepare a Data Summary Chart.

3. Suggest changes to the process. Develop a Process Analysis Worksheet depicting the modified process. Did you decrease complexity in the process?

4. Prepare a Before-and-After Chart. Did efficiency improve?

References:

1. Harbour, J. (1994). The Process Reengineering Workbook. (pp.25-27). New York: Quality Resources.

2. Friesen, G. (1986). Personal Communication.

Chapter 11 Failure Mode and Effects Analysis (FMEA)

A tool that potentially has significant value in the effort to improve healthcare is failure mode and effects analysis (FMEA). While CQI tools are focused on consistently delivering what the customer wants, FMEA is focused on minimizing the probability of the process breaking down, or failing, and minimizing the effects of process failure on a customer.

The various ways in which a particular process can break down are referred to as “failure modes.” Failure modes vary in their likelihood of occurrence, and the consequences, or “effects,” of each failure mode vary in severity. Failures also vary in the likelihood of being detected before the effect occurs.

Ideally, a process would experience no failure modes. However, processes do fail. As a consequence, mechanisms need to be in place to alert those involved that a failure has occurred, and this alert must be given as soon as possible after the process has gone into failure mode. In addition, things must happen to minimize the effects of the failure. This is the purpose of FMEA:

1. Minimize the frequency of a process going into failure mode.

2. Detect the failure mode as quickly as possible.

3. Minimize the effects of a failure mode.

The value of FMEA to healthcare is obvious.

For example, if a procedure is being done in the Radiology department that presents some risk to a patient, the caregivers must be very familiar with the process involved, the possible failure modes, indicators of failure, and steps to take to minimize the effects of process failure. Anyone familiar with healthcare is aware of situations where the caregivers were not adequately trained or were careless, a process went into failure mode, the failure was not detected until it was too late, and/or the severity of the effects was not properly mitigated. Patients have paid an extraordinarily high price in some of these situations.

There is little doubt that FMEA will serve a useful function in healthcare.

Conducting an FMEA involves clearly defined steps.1 However, FMEAs can differ significantly in how each of these steps is carried out (see Exhibit XI-1).

(Insert Exhibit XI-1 about here)

FMEA can take very little time or a significant amount of time. It will depend on the complexity of the process, the available data, the analytical ability of team members, the ease with which plans can be implemented, etc.
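
The scoring details are left to Exhibit XI-1 and to Stamatis. One widely used convention (assumed here, since the text does not spell it out) is to rate each failure mode from 1 to 10 on severity, likelihood of occurrence, and difficulty of detection, then multiply the three ratings into a Risk Priority Number (RPN) and address the highest RPNs first. A minimal sketch, with invented failure modes and ratings:

```python
# Sketch of a conventional FMEA scoring scheme (assumed, not taken from
# the text): RPN = severity x occurrence x detection, each rated 1-10.
failure_modes = [
    # (description, severity, occurrence, detection difficulty)
    ("Wrong contrast dose prepared",  9, 2, 4),
    ("Patient ID mismatch",          10, 1, 3),
    ("Film processor jam",            3, 5, 2),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for description, s, o, d in ranked:
    print(f"RPN {s * o * d:3d}: {description}")
```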

References:

1. Stamatis, D. (1995). Failure Mode and Effect Analysis: FMEA From Theory to Execution. Milwaukee: ASQ Quality Press.

Chapter 12 Other Models of Process Improvement

At the beginning of this training program, we presented the model used by Saint Francis Medical Center for process improvement. The four-step scientific model is shown in Exhibit XII-1.

(Insert Exhibit XII-1 about here)

We selected this model over others that are used elsewhere largely because it approximates the scientific method, with which many of us—especially those in clinical areas—are already comfortable. It also seemed to be the model most easily grasped by physicians, since much of what they do is based on the scientific method.

Even though we didn’t select one of the other methods, it is important that you have some awareness of other models that are being used in other organizations. As you read about process improvement, talk to others about what they are doing, attend seminars, etc., you will encounter these other models. So that you can have a greater understanding and be better able to communicate with others, this chapter will address some of the most common models used in other organizations.

PDCA

The most commonly used model is referred to as Plan-Do-Check-Act, the Deming Cycle, or PDCA. A more recent variation replaces “check” with “study” but is essentially the same thing.

This model, developed by Shewhart and adopted by Deming,1 says that processes should be addressed through four steps:

Plan

• Establish a need for process improvement

• Conduct a root cause analysis

• Generate potential solutions

• Plan a test of the solution most likely to be effective

Do

• Test the solution most likely to be effective on a small scale (i.e., conduct a pilot test)

Check

• See if the pilot test works (Deming preferred the term “study” later in his career for this phase, as it indicated that it takes more than merely checking to see if the pilot worked—you need to study the results of the pilot test to understand what happened.)

Act

• Implement the effective solution into the process.

Deming arranged the PDCA model into a cycle (see Exhibit XII-2).

(Insert Exhibit XII-2 about here)

The reason he used a circular depiction of his model was to show that it never ends. Once a process has been improved, then the planning begins for further improvement. He did not intend that the cycle be used rigidly. If a process improvement plan did not work, he would suggest returning to some prior step in the cycle to alter the plan so that a successful process improvement could, in fact, be attained.

FAST-PDCA2

This model adds a few preliminary steps that presumably can speed up the PDCA cycle:

F- Focus on a specific goal for the process improvement effort that is significantly beyond current performance (e.g., decrease p by 20%). Pull together the resources you’ll need to make the change.

A- Analyze the data and do a root cause analysis.

S- Select potential changes and the change most likely to be effective.

T- Test:

Plan the test

Do the test

Check to see if the test worked

Act on the knowledge gained

The primary advantage of this approach over PDCA is that the process improvement effort is more carefully defined and planned before the PDCA cycle is initiated. Hence, it’s more likely to work the first time. Overall, not a huge difference.

FOCUS-PDCA3

This model is the most thorough of the models we’ll discuss. In this model, there is significant preparation before initiating PDCA.

F- Find a process to improve.

Carefully select which process will be the focus of the improvement effort. It should be important to customers and clearly linked to the mission and strategic plan of the organization. As such, information provided by the customer about the service or product is analyzed as well as data concerning the performance of the process and strategic and operational plans. Based on these analyses, the process to be improved is selected and a measurement is developed that will show the effects of the improvement effort.

O- Organize to improve the process.

In this phase, all the resources needed to improve the process are gathered. A decision is made to assign the improvement effort to an individual or a team. If the decision is to use a team, a facilitator and a team leader are selected. Preliminary project plans, including estimated financial and equipment needs, are initiated. The team meets to clarify roles and agree on ground rules.

C- Clarify current knowledge of the process.

A flowchart is developed depicting how the process works. Or, if the process is performed in more than one way, a flowchart is developed for each. The team will select the flowchart that is most likely to work best and will clarify the situations that warrant varying from the best flowchart.

Quick and easy improvements are made; Kirk calls this “picking the low hanging fruit.”4

Workers are trained on the best approach to the process as well as any necessary variations. That is, the process is standardized.

U- Understand types and sources of process variation.

In this phase, the measurement that was selected, the key quality indicator (KQI), is studied. Assignable cause variation is eliminated so that the process is in control.

S- Select the solution that is most likely to improve the process.

Using tools previously discussed, the solution that is most likely to improve the process is selected and any necessary approvals to make the process changes are obtained.

Now the PDCA cycle starts with the planning phase. It continues until the process improvement goal is attained.

In actuality, there are not dramatic differences in the various methods. It’s more a matter of emphasis. Kirk’s model and PDCA are very similar. FAST-PDCA is intended to be more of a quick fix. FOCUS-PDCA is intended to be more methodical and thorough.

Chapter 12

Discuss the various models. How do they differ? What are the implications of each model for how you would approach process improvement?

References:

1. Handbook For Improvement: A Reference Guide for Tools and Concepts. (p. 19). Brentwood, TN: Executive Learning.

2. Handbook For Improvement: A Reference Guide for Tools and Concepts. (pp. 20-21). Brentwood, TN: Executive Learning.

3. Handbook For Improvement: A Reference Guide for Tools and Concepts. (pp. 23-35). Brentwood, TN: Executive Learning.

4. Kirk, R. (1994). Quality management and leadership skills. Training program presented at Saint Francis Medical Center, Grand Island, Nebraska.

Chapter 13 Cost of Quality

At some point, you will want an answer to the question of what is the bottom line of the improvement effort—what is it costing your organization and what actual cost savings have you experienced? In the past, management in many organizations viewed a quality improvement effort as another expense rather than as an opportunity for savings. They saw devoting as few resources as possible to ensuring quality as a means of controlling costs. This mistaken notion survived for many years because hard data were not available with which to document the costs and savings associated with quality improvement efforts. Such data can now be made available.

Categories of Quality Costs

Quality costs fall into two distinct categories. Costs incurred because poor quality potentially could occur are known as the costs of quality control; these costs are further divided into prevention costs and appraisal costs. The other category of quality costs is incurred because poor quality actually occurs. This category is referred to as failure costs and consists of the costs of internal and external quality deficiencies.

Before describing how the various quality costs interact with one another, we first describe the four categories of quality costs: preventive, appraisal, internal failure, and external failure costs. Exhibit XIII-1 lists several types of quality costs in the four categories. We will discuss examples of each within its respective cost category.

(Insert Exhibit XIII-1 about here)
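
Once individual cost items are tagged with one of these four categories, rolling them up for management review is straightforward. The sketch below uses invented line items and dollar figures purely for illustration:

```python
# Hypothetical roll-up of quality cost items into the four categories.
cost_items = [
    # (line item, category, annual dollars)
    ("Control chart training", "preventive",       4_200),
    ("Sponge count labor",     "appraisal",        1_100),
    ("Re-taken x-rays",        "internal failure", 2_700),
    ("Complaint response",     "external failure", 3_900),
]

totals: dict[str, int] = {}
for _, category, dollars in cost_items:
    totals[category] = totals.get(category, 0) + dollars

for category, dollars in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:17s} ${dollars:,}")
```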

Quality Control --- Preventive Costs

Preventive costs are seen to be the key to an understanding of the savings associated with continuous quality improvement in an organization. There can be no doubt that doing it right the first time is less costly than correcting problems associated with poor quality planning or execution. It has been repeatedly shown that the sooner quality issues are addressed in a process, the less costly the detection and correction of these problems will be. Prevention is the least expensive way to reduce the costs of quality to the organization.

• Quality Planning

Quality planning costs account for all of the time that the organization devotes to designing the quality control effort. This includes deciding on and developing procedures, methods and required documentation. The costs represent the effort expended in deciding how best to accomplish work tasks. Functions such as reliability studies, composing work instructions, deciding on test procedures and the like are all included in quality planning costs. Capturing the true extent of planning costs is difficult using normal accounting procedures. Therefore, a separate system of cost collection will have to be set up if the majority of planning costs is to be captured.

• Process Controls

This category of preventive cost includes the time spent in studying and analyzing the processes of the organization and its suppliers. Included are the costs of setting up and employing control charts, failsafe mechanisms and conducting process capability studies.

• Design and Development of Quality Information

These preventive costs capture the time that personnel devote to the design and development of methods and equipment that will gather data and adjust or measure some part of the process. Essentially, this represents the implementation cost of the procedures that were decided upon in the planning stage.

• Quality Training and Workforce Development Programs

This category of preventive costs captures the costs of implementation and dissemination of the quality systems to all levels of employees throughout the organization. It should not include job training per se, but rather training in the quality systems of the organization. In the hospital environment, training in CPR or how to take blood pressures would not be included as a quality cost. However, training in control charting, how to employ the numerous process improvement tools, and so on would fall into this category of preventive costs.

• Quality Team Administration

The administrative costs associated with quality teams should be recorded. As with all categories of quality costs, management will be interested in the total costs and the total savings of the quality effort. Quality improvement teams need monitoring, training, and facilitating. Such costs need to be recognized in the quality cost accounting system that the organization develops.

• Design Verification

Any costs that are associated with verification of quality, reliability or safety aspects of a procedure or a proposed change to a procedure.

• Systems Development and Management

Quality systems development costs and management and/or support costs for the preventive efforts are included in this type of preventive cost. Redesign of information systems that takes place in order to provide needed information to the quality improvement effort belongs in this category. Also, the cost of management involvement in the preventive effort is accounted for by this category.

• Procurement Planning

The cost of development and employment of quality procedures associated with outside suppliers. This includes selection and evaluation procedures for existing and proposed suppliers to the organization; establishment of quality standards, safety requirements and required inspections and tests, packaging standards and post-purchase notification requirements when quality problems are discovered by the supplier.

• Reliability Studies

Any costs associated with conducting internal reliability studies or reviewing supplier studies are preventive costs.

• Actions Taken Identification Systems

Ensuring the ability to trace actions taken (e.g., medications given) requires that records be meticulously kept on each procedure. In clinical areas, the patient’s chart may provide much of this information. However, if any gaps in the information trail with regard to a patient are noted, the gaps should be remedied by developing a tracking procedure.

• Vendor and Patient Surveys

This category represents the effort to go outside of the organization in order to identify potential or actual quality challenges.

• Overhead Associated with any of the above

Any overhead costs of prevention not captured in the above categories need to be estimated.

Quality Control --- Appraisal Costs

Appraisal costs include the expenses associated with efforts to detect poor quality goods or services. Appraisal efforts focus on inspection of raw materials, work in process, and finished products. The main objectives of appraisal efforts are to monitor the quality of goods and services being provided and to prevent poor quality goods or services from reaching the customer. The organization incurs both preventive and appraisal costs in the effort to prevent poor quality product or service from reaching customers. Appraisal costs include, but are not limited to, the following areas:

• Test and Inspection of Purchased Material, Work in Process, or Finished Work

Included in this area would be the time that employees spend engaged in testing activities both on-site and at suppliers’ facilities. Training in test and inspection procedures would be included as an appraisal cost.

• Labor Time Spent Checking Own Work

In many organizations that have adopted a TQM philosophy, the role of testing and inspection is assigned to the employee who actually completes the product or service. The purpose of self-checking is to find problems as near as possible to the time and source of their occurrence. The cost of appraisal is difficult to determine for these employees, as they are acting as both producers and appraisers of services simultaneously. The percentage of time devoted to test and inspection may be estimated or arrived at through time studies or work sampling methods.

• Set Up Time for Test and Inspection

In addition to the actual time that is taken to perform the test or inspection, any time that is devoted to making ready for test and inspection would be an appraisal cost.

• Test Equipment

The acquisition cost of test and inspection equipment must also be considered an appraisal cost. This would include items such as gages, flowmeters, and any other measuring devices. If the testing or inspection function is an inherent part of a piece of equipment used for other purposes, the cost of testing or inspection probably would not be separable from the cost of the equipment itself and, therefore, would not be an appraisal cost.

• Test Materials

Report forms and other supplies and materials consumed or used in the test and inspection process should be considered as appraisal costs.

• Maintenance and Calibration of Test Equipment

Preventive maintenance expense, repair costs and the cost of periodic re-calibration of the inspection equipment occur as a part of normal appraisal and are thus included.

• Outside Endorsements

Costs associated with accreditation reviews and other outside endorsements are also classified as appraisal costs. This would include the costs of review by JCAHO, HCFA, etc.

Poor Quality Occurs ---- Internal Failure Costs

The next two categories of quality costs, internal and external failure costs, are those costs that are incurred because poor quality exists. Internal failure costs focus on costs of quality problems detected before the product or service reaches the ultimate customer. External failure costs are those incurred when the product or service reaches the ultimate customer before the quality problem is detected. Failure costs often make up the largest percentage of total quality costs.

• Scrapped Product

Items that are found to be defective through the preventive or appraisal process and are subsequently discarded are classified as scrapped product (e.g., defective x-rays). Discarding supplies that may have become contaminated would be an internal failure cost. Internal failure costs also include the cost of disposing of the scrapped product, the cost of the product itself, and any labor time associated with disposal efforts. Obsolete materials are not classified as a quality cost unless the obsolescence is the result of a failure of the quality system itself.

• Additional Materials Necessary

The cost of obtaining replacement materials for contaminated or otherwise scrapped product is an internal failure cost. This cost includes the time spent on procurement of replacement items, including the identification of new suppliers.

• Equipment Downtime and Equipment Adjustment

The revenue lost because equipment is unavailable due to a quality problem is classified as an internal failure cost. Therefore, if an outdated batch of x-ray film results in the inability to take x-rays, the lost revenue is a quality cost. Should poor quality materials damage any equipment, the downtime costs are internal failure costs. Any adjustment to equipment required as a result of the above should also be accounted for as a quality cost due to internal failure.

Poor Quality Occurs ---- External Failure Costs

The most expensive, and by far the least desirable, type of quality cost occurs due to external failures; that is, the customer directly experiences the effects of poor quality. This category of quality cost occurs as a consequence of the failure of preventive and appraisal systems.

• Complaint Response

When customers become aware that they have experienced poor service from the organization and make this fact known, the cost of responding to the complaint is an external failure cost. Costs included would be any replacement item, the labor time devoted to correcting the problem, and any other cost directly related to responding to the customer complaint.

• Service

Essentially, service costs are those costs incurred to correct imperfections in a product or service that has previously been provided to a customer. For example, should outdated x-ray film be used for patient diagnosis, any cost of retaking the x-ray and reading it again becomes an external failure cost.

• Surveys

The costs of “after service” surveys of customers, undertaken to detect and respond to actual quality problems experienced by customers, are external failure costs.

• Liability

In the medical field, perhaps the largest knowable cost of quality failure is liability cost. Legal and court expenses of both won and lost cases, as well as settlements and court-awarded judgments, are included.

• Opportunity Cost of Lost Customers

W. Edwards Deming voiced the opinion that the opportunity cost of lost customers was the largest of the quality costs, yet its actual amount was unknown and unknowable. The organization can never know for certain why a patient chooses another healthcare provider for subsequent treatment. However, it is reasonable to assume that a patient will hesitate to return to a facility that left the impression of giving less than the best care. It has been estimated that a dissatisfied customer will tell five to ten people about the experience. Thus, poor quality service by a hospital can have far-reaching and long-lasting negative consequences. The reverse is also true: leaving the impression of exemplary treatment can have far-reaching and long-lasting positive consequences.

Behavior of Quality Costs

External failure is the most costly of the four categories. If you focus on inspection and detection of quality problems, you will find that you have traded the cost of external failures for the cost of appraisal and internal failures; that is, you catch your quality problems before they reach the customer. Normally, the increase in appraisal and internal failure costs will be much less than the cost of the external failures avoided, so an emphasis on inspection is better than a purely reactive strategy. However, the most cost-effective strategy is a focus on preventive efforts. Preventive efforts can be low in cost, and the savings generated by an extra dollar spent on prevention far outweigh that cost: prevention can drastically reduce, or in some instances eliminate, internal and external failure costs. Many firms have reported that a dollar spent on prevention results in savings of anywhere from five to twenty dollars. As preventive efforts become more sophisticated and as the organization becomes more confident in their efficacy, inspection and testing efforts can be reduced or eliminated altogether. Thus, an emphasis on prevention reduces costs in the other three quality cost categories. The sketch below illustrates the arithmetic.
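
The following minimal sketch, with entirely hypothetical dollar figures of our own invention, illustrates how total quality cost can behave as spending shifts from reaction to inspection to prevention.

# Hypothetical annual quality costs under three strategies. The figures
# are illustrative only; the point is the shape of the totals, not the
# specific numbers.

scenarios = {
    "Reactive (failures reach the customer)": {
        "prevention": 5_000, "appraisal": 10_000,
        "internal_failure": 30_000, "external_failure": 150_000,
    },
    "Inspection-focused (failures caught internally)": {
        "prevention": 10_000, "appraisal": 60_000,
        "internal_failure": 80_000, "external_failure": 20_000,
    },
    "Prevention-focused": {
        "prevention": 40_000, "appraisal": 25_000,
        "internal_failure": 15_000, "external_failure": 5_000,
    },
}

for strategy, costs in scenarios.items():
    print(f"{strategy}: total quality cost = ${sum(costs.values()):,}")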

Quality Cost Accounting Systems

While many of the costs associated with quality cost accounting are available from existing records, those records are rarely set up to report quality costs as such. Therefore, accounting systems will need to be modified to separately collect and report quality costs. Where costs are not presently being captured separately, new collection systems need to be established or cost estimates need to be made. For example, the organization may retain legal counsel for a variety of tasks; it will need to determine how much of the legal costs to allocate to quality costs and on what data to base that decision. A sketch of one such allocation follows.
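
Here is a minimal sketch, with hypothetical ledger entries and allocation fractions of our own, of tagging existing account records with quality cost categories so they can be collected and reported separately.

from collections import defaultdict

# Hypothetical ledger entries. "quality_fraction" is the estimated share
# of each account that belongs to quality costs; "category" is one of
# the four quality cost categories.
ledger = [
    {"account": "Legal retainer", "amount": 24_000,
     "quality_fraction": 0.25, "category": "external failure"},
    {"account": "Staff training", "amount": 18_000,
     "quality_fraction": 1.00, "category": "prevention"},
    {"account": "Instrument calibration", "amount": 6_000,
     "quality_fraction": 1.00, "category": "appraisal"},
    {"account": "Discarded contaminated supplies", "amount": 9_500,
     "quality_fraction": 1.00, "category": "internal failure"},
]

totals = defaultdict(float)
for entry in ledger:
    totals[entry["category"]] += entry["amount"] * entry["quality_fraction"]

for category, amount in sorted(totals.items()):
    print(f"{category}: ${amount:,.0f}")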

Once established, a quality cost accounting system can provide the costs and savings of appraisal and preventive efforts, thereby providing management with the value of these efforts.

Conclusion

Knowledge of the organization’s costs of quality provides it with much needed information about the effectiveness of its quality improvement efforts. As management experiences the positive results of expenditures to improve systems through preventive or appraisal efforts, future efforts are more readily welcomed. As a consequence, a culture of quality will become ingrained within the organization. Quality costs serve as a measurement tool, an analysis tool, and a budgeting tool for an organization that seeks to improve quality.

Chapter 13 Exercise

1. Describe the quality costs that are associated with the work that you do. Categorize these costs into the four categories of quality costs. Explain why you have assigned each of the costs to its respective category.

References:

Crosby, P. Quality is Free. New York: New American Library, 1979.

Feigenbaum, A. Total Quality Control. New York: McGraw-Hill, 1991.

Sluti, D. Linking Process Quality with Performance: An Empirical Study of New Zealand Manufacturing Plants. Ph.D. dissertation, University of Auckland, 1992.

-----------------------

[Figure residue: control chart selection flowchart. The decision questions (Is the data variable or attribute? Individual measurements or groups of measurements? Measuring defects or defectives? Is the sample size constant? Is the area of opportunity constant?) lead to the p chart, np chart, c chart, u chart, X-bar & R chart, or XmR chart; if the assumptions for attribute charts cannot be met, use an XmR chart. The span also contained labels from a supplier/inputs/our actions/outputs-to-customer process diagram and control chart axis labels (CL, LCL, Measurement). A sketch of the selection logic follows.]
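
As a reading of that flowchart, here is a minimal sketch of the selection logic as a function; the parameter names are ours, and the mapping follows standard SPC conventions.

def select_chart(variable_data: bool, grouped: bool = True,
                 defectives: bool = True, constant_sample_size: bool = True,
                 constant_area_of_opportunity: bool = True) -> str:
    """Return the control chart suggested by the flowchart's questions."""
    if variable_data:
        # Variable (measurement) data: groups vs. individual measurements.
        return "X-bar & R chart" if grouped else "XmR chart"
    # Attribute data. If the attribute chart assumptions cannot be met,
    # the flowchart's footnote says to fall back to the XmR chart.
    if defectives:
        # Counting defective units.
        return "np chart" if constant_sample_size else "p chart"
    # Counting defects.
    return "c chart" if constant_area_of_opportunity else "u chart"

# Example: attribute data, counting defectives, sample size varies.
print(select_chart(variable_data=False, defectives=True,
                   constant_sample_size=False))  # -> p chart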

[Data table residue: Patient Waiting Times, ten subgroups of five observations each; a sketch using this data follows.]

33 41 17 22 6
29 12 44 16 30
49 37 38 28 48
15 20 29 27 16
37 22 32 27 36
41 18 14 41 19
18 17 21 35 13
5 15 42 28 7
20 26 39 12 23
26 22 24 33 30
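
Since the table reads as subgroups of five, here is a minimal sketch of computing X-bar & R chart limits from it. The control chart constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114) are standard, but treating these rows as rational subgroups is our assumption.

# Compute X-bar & R chart centerlines and control limits from the
# waiting time subgroups above.
subgroups = [
    [33, 41, 17, 22, 6],
    [29, 12, 44, 16, 30],
    [49, 37, 38, 28, 48],
    [15, 20, 29, 27, 16],
    [37, 22, 32, 27, 36],
    [41, 18, 14, 41, 19],
    [18, 17, 21, 35, 13],
    [5, 15, 42, 28, 7],
    [20, 26, 39, 12, 23],
    [26, 22, 24, 33, 30],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for subgroup size 5

x_bars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]

x_double_bar = sum(x_bars) / len(x_bars)  # grand mean: X-bar chart CL
r_bar = sum(ranges) / len(ranges)         # average range: R chart CL

print(f"X-bar chart: CL = {x_double_bar:.1f}, "
      f"UCL = {x_double_bar + A2 * r_bar:.1f}, "
      f"LCL = {x_double_bar - A2 * r_bar:.1f}")
print(f"R chart: CL = {r_bar:.1f}, UCL = {D4 * r_bar:.1f}, LCL = {D3 * r_bar:.1f}")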

[Figure residue: four-phase problem-solving cycle; the phase boxes are reconstructed below in order.]

1. PROJECT SELECTION

Identify a problem, process, or improvement opportunity, clearly define it, and describe why it’s important to work on.

1. Identify a broad issue, process, or procedure.

2. Brainstorm and discuss all aspects.

3. Prioritize, select, and clarify one aspect to pursue.

4. Validate that the issue is worth the time that will be spent to improve.

5. Identify and clarify the improvement goal.

Key questions to ask

• What is the problem, issue, or gap in service?

• Is this the core issue or is this a complex issue?

• Which aspect is a customer priority?

2. CAUSE ANALYSIS & EVIDENCE

Seek root cause(s) or breakdowns in the process. Verify with data to point the way to workable solutions/procedures.

1. List possible causes of the problem or breakdowns in the process.

2. Prioritize perceived causes or process disconnects.

3. Select data to collect that will support, rule out, or validate the cause of failure.

4. Collect and display data. Repeat until all causes or disconnects are verified and consensus is reached.

Key questions to ask

• Why is this occurring?

• Of the possible causes, which are worthy of the labor of data collection?

• What does the data say? Does it support the theory?

3. SOLUTION PLAN AND PILOT

Select a solution or a process improvement idea and plan for a successful implementation in the organization.

1. Use knowledge of the root cause or process failure to develop or brainstorm realistic and workable solutions/improvements.

2. Select the best solution based on predefined criteria.

3. Using representatives from all involved departments, create a cross-functional action plan for pilot testing, evaluation, and implementation.

Key questions to ask

• What are the possible solutions?

• What criteria will help identify the best one?

• Which one is best and how will we test it?

4. CHECK, ADJUST, STANDARDIZE

Evaluate the solution or new procedure and assess its efficiency and effectiveness. Cheer or revise, and standardize.

1. Pilot/test the new system, check results, assess.

2. Tweak or revise if needed.

3. Target related or new issues to address.

4. Standardize processes, communicate details to all involved, train.

5. Implement, monitor, and evaluate results.

Key questions to ask

• Are results meeting both outcome and process targets?

• Are there any changes to make before standardizing?

• How will we implement?

Based on R. Kirk (1994). Quality management and leadership skills. Training program presented at Saint Francis Medical Center, Grand Island, NE.

[Figure residue: a kitchen layout diagram (Stove, Cook’s Table, Salad Maker’s Table), histogram and control chart axis labels (Frequency, Measurement, UCL), and the p-chart control limit formula, which reconstructs as: control limits = p-bar ± 3 × √(p-bar × (100 − p-bar) / n), with p expressed as a percentage. A sketch of this computation follows.]
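
Here is a minimal sketch, with hypothetical subgroup data of our own, of computing p-chart limits from that formula; the sample size and percent-defective values are assumptions, not from the text.

import math

# Hypothetical: ten subgroups of n = 50 items each, with the percent
# defective recorded for each subgroup.
n = 50
percent_defective = [6.0, 4.0, 8.0, 2.0, 6.0, 4.0, 10.0, 6.0, 2.0, 8.0]

p_bar = sum(percent_defective) / len(percent_defective)
sigma_p = math.sqrt(p_bar * (100 - p_bar) / n)  # the sqrt((100 - p) * p / n) term

ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)  # a percentage cannot go below zero

print(f"CL = {p_bar:.2f}%  UCL = {ucl:.2f}%  LCL = {lcl:.2f}%")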
