Running head: CONTROL CHARTS IN SOCIAL SERVICES




Application of Statistical Process Control Charts in Small Human Service Programs

Bruce K. Barnard

Eastern Illinois University

Application of Statistical Process Control Charts in Small Human Service Programs

Statistical quality control methods, including the use of control charts, are widely accepted in manufacturing companies as a quality improvement tool (Montgomery, 2001). Control charts have also been used in service organizations. Healthcare providers have used control charts to monitor hospital-acquired infections (Morton et al., 2001) and organizational performance (Hantula, 1995). Statistical methods have also been proposed to reduce the cost of meetings and focus on measurable objectives in behavioral health organizations (Hayes, 2000). Control charts have been proposed to evaluate social work research data in single-subject designs (Orme & Cox, 2001) and to monitor staff performance in a state-operated residential treatment facility (Dey, Sluyter, & Keating, 1994). This paper explores the use of control charts to improve client service at the small program and individual practitioner level, and provides an example of how control charts can be used in an individual counseling practice.

A basic assumption of statistical process control is that all processes exhibit some degree of variation (Montgomery, 2001; O’Con, 1997). By determining what can be expected from a normal process, known as capability, we can isolate and determine the causes of process variation. A process that is relatively stable and producing the desired outcome with only chance variations is said to be in control. Variations in the process can be classified as either chance causes or assignable causes. Assignable causes may be generated by human action, by a machine, or by materials (O’Con, 1997). The purpose of a control chart is to monitor a process operating in control in order to differentiate between variations caused by chance, which represent normal statistical variance, and variations with assignable causes, which require investigation and corrective action. Restricting action to variations with assignable causes eliminates potentially costly and time-consuming analysis of, and responses to, variations resulting solely from chance.

The availability of economical software packages has put statistical process control methods within reach of most practitioners and programs as a decision-making tool. Points of variance that lend themselves to statistical process control methods in social services include: access to service, length of sessions, quantity of billable work, actual performance as compared to best practices, and client satisfaction (Hayes, 2000). Orme & Cox (2001) have proposed using control charts in direct care to identify nonnormative outcomes to an intervention, triggering an increase or decrease in services or a search for causes by the clinician. Statistical methods are most frequently applied to client-reported outcomes and satisfaction with services.

Counselors and practitioners are accustomed to working on and generating solutions to human problems. It could be said that their work is as much art as science. Often, the first inclination of a practitioner when confronted with a problem is to talk to colleagues about the problem and generate possible solutions. Hayes (2000) observes that the training and inclination of those who work in the behavioral healthcare field results in opinion-driven meetings and discussions that are often unproductive. While trained in research methods related to the social sciences, most practitioners have shifted the focus of their day-to-day work to addressing client issues. A survey of competencies valued by addiction counselors revealed that an understanding of research and outcomes related to clinical practice was the lowest ranked choice, believed to be less important than philosophy and practice models, family and community supports, and an interdisciplinary approach (Barnard, 2004). The use of statistical methods at the program level is often limited to measuring client satisfaction, if such methods are used at all.

To implement statistical process control effectively it must be part of an overall quality improvement program (Montgomery, 2001). For the independent practitioner or small program, this should include clarifying the mission and commitment to quality, maintaining a focus on customer satisfaction, and measuring performance. These basic steps are often overlooked because of a misguided belief that they are not necessary in a small practice. In fact, the process of clarifying the mission and commitment to quality is beneficial to any organizational effort regardless of size (Besterfield, Michna, Besterfield, & Sacre, 2003).

Orme & Cox (2001) have identified a number of challenges to implementing control charts with single-subject data that are applicable to their use in small social services practices. They point out that control charting has been developed primarily for evaluating manufacturing processes, and that the processes in human services and social work differ significantly from those in manufacturing.

A process should be selected that is important to the desired quality outcomes, lends itself to statistical analysis, and where sufficient data is available to use statistical techniques. Orme & Cox (2001) state that in application of control charts to social services, “a challenge is that most often specifications for acceptable outcomes are unavailable” (p. 125). This may actually be an argument for the use of control charts. The collection and analysis of historical data can measure the process against its historical performance and provide a window into performance that can be used in improvement efforts. This approach, coupled with external benchmarking, may provide the beginnings of a quantifiable improvement program that monitors outcomes against past performance, the performance of similar programs, and alternate approaches.

Another challenge is the limited number of observations available in small human service practices. Orme & Cox (2001) observe that “with small sample sizes any method for analyzing data, visual or statistical, is prone to error” (p. 125). Statistical methods are available to determine the necessary sample size and sampling frequency (Montgomery, 2001). Care should be taken to select a process where sufficient data is available for control chart methods to be useful. In some cases, the time and effort necessary to collect the data may constitute an additional barrier. In a small program it may be advisable to select a process where data is already available.

Human service processes have a large number of inputs and variables. Practitioners are often left wondering whether observable markers indicate an outcome of an intervention or a response to other environmental factors. While complicating analysis, this is not necessarily an argument against the use of control charts but rather an indication of the need for thorough causal analysis.

The following example illustrates how an individual practitioner offering counseling services might use control charts and causal analysis to monitor client completed and failed appointments (no-shows). This process was selected for the following reasons.

One, the process is essential to desired outcomes. No progress will be made in counseling unless the client attends scheduled sessions. Further, the process is essential to the business operation as payment is dependent on appointments attended by clients.

Two, the process offers an adequate number of observations to support statistical analysis. The number of no-shows and made appointments will be recorded as a mean over time. The period over which observations are recorded can be adjusted based on the frequency of scheduled appointments.

Three, gathering data should require a minimum of additional effort and cost. The simplest approach would be for the practitioner to review appointment book records once a month and record made and missed appointments.

External customer specifications are not available for no-shows. However, a group of practitioners could share data and the individual practitioner could use data from others to provide an external benchmark. Comparison of business or marketing practices or clinical approach may be extremely helpful to all participants in completing causal analysis. If the practitioners regularly discuss quality issues and business practices, they could in effect form a quality steering committee that would be beneficial to all participants.

In many applications it is necessary to determine a sampling frequency. For example, in a manufacturing setting we would determine how often samples are pulled from the process for testing. Statistical methods are available to determine sampling frequency (Montgomery, 2001). In this case, we can easily sample the entire universe of data simply by entering data for all appointments. For an alternative method involving sampling, we would simply look at a representative sample, e.g., gathering data from the second week of each month.
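As a brief illustration of the alternative sampling scheme just described, the sketch below keeps only appointments that fall in the second week of each month (taken here as days 8 through 14). The appointment records are invented for illustration.

```python
# Filter a list of dated appointment records down to the second week
# (days 8-14) of each month, a representative sample as described above.
from datetime import date

# Hypothetical appointment log: (date, outcome)
appointments = [
    (date(2004, 1, 5), "kept"),
    (date(2004, 1, 12), "no-show"),
    (date(2004, 1, 13), "kept"),
    (date(2004, 2, 10), "kept"),
    (date(2004, 2, 24), "no-show"),
]

second_week = [(d, status) for d, status in appointments if 8 <= d.day <= 14]
print(second_week)  # only the three appointments from days 8-14 remain
```

The same filter could be applied month by month to build the sampled data set.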

Building the initial control charts is relatively simple. Using historical data for the past two years, the practitioner begins construction of the control charts. The number of made appointments and the number of no-shows per week are recorded for 24 months. A two-year period will provide an opportunity to observe seasonal variations. In this case, we will construct control charts to monitor no-shows, using the average or mean number of no-shows per week.

The following steps are used in statistical process control (P.P. Liu, personal communication, September 1, 2005).

1. Identify the process to be monitored. (In this case, appointments)

2. Identify the variable to be monitored. (In this case, no-shows)

3. Measure historical data.

4. Construct an R chart and an Xbar chart.

5. Plot the data.

6. Evaluate the data looking for signals, trends and outliers.

7. Construct a revised control chart without the outliers.

8. Use revised control limits to predict process control.

9. Monitor the process.

10. Conduct causal analysis when necessary.

Statistical programs make it simple to construct control charts. However, a practitioner can construct a chart manually using the following formulas. The factors D3, D4, and A2 for control charts are available in statistical references. It is important to understand that R (range) charts plot the within-sample variation, while Xbar charts plot the sample means.

Equations for the R chart:

Upper Control Limit = D4 × R(bar)

Centerline = R(bar)

Lower Control Limit = D3 × R(bar)

Equations for the Xbar chart:

Upper Control Limit = X(double bar) + A2 × R(bar)

Centerline = X(double bar), the mean of the sample means

Lower Control Limit = X(double bar) − A2 × R(bar)
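For a practitioner comfortable with a little scripting, the formulas above can be computed directly. The sketch below groups weekly no-show counts into monthly subgroups of four weeks (n = 4), for which the standard table factors are A2 = 0.729, D3 = 0, and D4 = 2.282. The monthly data are invented for illustration.

```python
# Compute Xbar and R chart control limits from weekly no-show counts,
# grouped into monthly subgroups of four weeks each (n = 4).
# Table factors for subgroup size n = 4, from standard SPC tables:
A2, D3, D4 = 0.729, 0.0, 2.282

# Hypothetical data: each inner list is one month of weekly no-show counts.
months = [
    [1, 2, 0, 1],
    [2, 1, 1, 3],
    [0, 1, 2, 1],
    [1, 0, 1, 2],
]

xbars = [sum(m) / len(m) for m in months]    # subgroup means
ranges = [max(m) - min(m) for m in months]   # subgroup ranges

x_dbar = sum(xbars) / len(xbars)             # X(double bar): grand mean
r_bar = sum(ranges) / len(ranges)            # R(bar): mean range

# R chart limits
ucl_r = D4 * r_bar
lcl_r = D3 * r_bar

# Xbar chart limits (a negative LCL is treated as 0 for count data)
ucl_x = x_dbar + A2 * r_bar
lcl_x = max(0.0, x_dbar - A2 * r_bar)

print(f"R chart:    CL={r_bar:.2f}  UCL={ucl_r:.2f}  LCL={lcl_r:.2f}")
print(f"Xbar chart: CL={x_dbar:.2f}  UCL={ucl_x:.2f}  LCL={lcl_x:.2f}")
```

A statistical package would produce the same limits; the point of the sketch is that nothing beyond arithmetic is involved.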

The resulting R and Xbar charts for no-shows are shown in Fig. 1 and Fig. 2.

Fig. 1. R chart of weekly no-shows (chart not reproduced).

Fig. 2. Xbar chart of weekly no-shows (chart not reproduced).

In this case the R chart indicates that the month of March 2004 plots outside the control limits; it is an outlier. The within-sample variation for that month was excessive. In order to plot accurate control limits for the Xbar chart (Fig. 2) it will be necessary to remove the outlier and replot the control limits. Fig. 3 and Fig. 4 show the results following removal of the outlier.
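Removing an outlying subgroup and recomputing the limits is mechanical. The sketch below uses hypothetical monthly data (four weekly no-show counts per month) and treats the second month as the subgroup that plotted out of control; A2 = 0.729 is the standard table factor for subgroup size n = 4.

```python
# Recompute Xbar chart limits after excluding an out-of-control subgroup.
A2 = 0.729  # table factor for subgroup size n = 4

# Hypothetical data: each inner list is one month of weekly no-show counts.
months = [
    [1, 2, 0, 1],
    [2, 1, 1, 3],   # suppose this subgroup plotted outside the limits
    [0, 1, 2, 1],
    [1, 0, 1, 2],
]

# Drop the outlier (index 1) before recomputing.
kept = [m for i, m in enumerate(months) if i != 1]
xbars = [sum(m) / len(m) for m in kept]
ranges = [max(m) - min(m) for m in kept]
x_dbar = sum(xbars) / len(xbars)
r_bar = sum(ranges) / len(ranges)

ucl = x_dbar + A2 * r_bar
lcl = max(0.0, x_dbar - A2 * r_bar)  # floor at 0 for count data
print(f"revised Xbar limits: CL={x_dbar:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
```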

Fig. 3. Revised R chart with the outlier removed (chart not reproduced).

Fig. 4. Revised Xbar chart with the outlier removed (chart not reproduced).

The upper control limit (UCL) and the lower control limit (LCL) in Fig. 4 define expected statistical variation. The centerline is the mean of the means, or X(double bar), in this case 1.25. Historical data is used to establish control limits. The process can now be continually monitored for variations that exceed chance variation. In Fig. 4, no-shows for the month of January 2004 exceed the upper control limit of 2.33 and would be investigated further.
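Ongoing monitoring then reduces to comparing each new monthly mean against the stored limits. The sketch below uses the illustrative values reported for Fig. 4 (centerline 1.25, UCL 2.33), taking the LCL as zero since a count cannot be negative; the recent monthly means are invented.

```python
# Flag months whose mean weekly no-shows fall outside the control limits.
UCL, LCL = 2.33, 0.0  # limits from the Fig. 4 example; LCL floored at 0

def out_of_control(monthly_means):
    """Return (month_index, mean) pairs that exceed the control limits."""
    return [(i, m) for i, m in enumerate(monthly_means)
            if m > UCL or m < LCL]

# Hypothetical monitoring data: mean no-shows per week for recent months.
recent = [1.0, 2.5, 1.25, 0.75]
print(out_of_control(recent))  # prints [(1, 2.5)]
```

Only the flagged month (here, the one with a mean of 2.5) would trigger further investigation; the rest are within chance variation.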

A number of additional tools are available to evaluate processes using control charts. Warning limits can be plotted inside the upper and lower control limits triggering additional evaluation. Patterns and trends can be evaluated to monitor statistical control (Montgomery, 2001).
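Warning limits can be derived from existing control limits without refitting the charts. A common convention places two-sigma warning limits two-thirds of the way from the centerline to the three-sigma control limits. The sketch below applies this to the illustrative Fig. 4 values (centerline 1.25, UCL 2.33), again taking the LCL as zero for count data.

```python
# Two-sigma warning limits sit two-thirds of the way from the centerline
# to the three-sigma control limits (a common SPC convention).
def warning_limits(center, ucl, lcl):
    upper = center + (2 / 3) * (ucl - center)
    lower = center - (2 / 3) * (center - lcl)
    return upper, lower

# Centerline and UCL from the example; LCL taken as 0 for count data.
upper_warn, lower_warn = warning_limits(1.25, 2.33, 0.0)
print(f"warning limits: {lower_warn:.2f} to {upper_warn:.2f}")
```

A point between a warning limit and a control limit would not by itself trigger corrective action, but it would prompt closer watching of subsequent points.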

Control charts provide a means to separate random statistical variation from variation with assignable causes. In the above example, a practitioner would conduct a causal analysis to attempt to determine the root cause of any variance that exceeds the control limits. In this example a number of possibilities could be explored that might explain variation. Environmental factors such as weather, the economy, or local events may have an effect on the no-show rate. Business practices, such as the performance of an answering service, performance of telephone equipment, reminder notices, or fees may be a factor. In addition, there are clinical factors such as counselor performance, diagnosis, referral source, and type of intervention that may be a factor.

A number of statistical tools are available to assist with causal analysis. For example, a practitioner may produce additional charts to analyze the relationship between no-show rate and the length of time between sessions, or the relationship between no-shows and completed appointments. A series of such comparisons may help isolate the assignable cause. The purpose of control charts is to alert the practitioner to the need to investigate further and eliminate unnecessary investigation and responses to variations that result from chance.
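As one concrete, deliberately simplified illustration of such a comparison, the sketch below computes a Pearson correlation between the number of days since the previous session and whether the appointment was missed. The data and the helper function are hypothetical; a statistical package would provide the same calculation.

```python
# Pearson correlation as a rough causal-analysis aid: does a longer gap
# between sessions go with a higher chance of a no-show?
def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical appointment records.
gap_days = [7, 7, 14, 21, 7, 28, 14, 21]  # days since previous session
no_show = [0, 0, 0, 1, 0, 1, 0, 1]        # 1 = missed appointment

r = pearson_r(gap_days, no_show)
print(f"r = {r:.2f}")  # a strongly positive r points at scheduling gaps
```

A strong correlation would not prove causation, but it would tell the practitioner where to look first, for instance at reminder practices for clients scheduled further apart.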

There are a number of challenges to implementing control charts in small human service applications. However, these challenges can be overcome, and control charts can provide a valuable window into key processes and become an important tool in a quality improvement effort.

References

Barnard, B.K. (2004). Implications of personal recovery history for training and development of addiction treatment workers. Master’s thesis, Eastern Illinois University, Charleston, Illinois.

Besterfield, D.H., Michna, C.B., Besterfield, G.H., & Sacre, M.B. (2003). Total quality management (3rd ed.). Upper Saddle River, NJ: Pearson Education.

Dey, M.L., Sluyter, G.V., & Keating, J.E. (1994). Statistical process control and direct care staff performance. Journal of Mental Health Administration, 21(2), 201-209.

Hantula, D.A. (1995). Disciplined decision making in an interdisciplinary environment: some implications for clinical practice. Journal of Applied Behavior Analysis, 28(3), 371-378.

Montgomery, D.C. (2001). Introduction to statistical quality control (4th ed.). New York: John Wiley & Sons.

Morton, A.P., Whitby, M., McLaws, M.L., Dobson, A., McElwain, S., Looke, D., et al. (2001). The application of statistical process control charts to the detection and monitoring of hospital-acquired infections. Journal of Quality in Clinical Practice, 21, 112-117.

O’Con, R. (1997). A brief introduction to statistical process control. Tech Directions, 57(3), 3-7.

Orme, J.G. & Cox, M.E. (2001). Analyzing single-subject design data using statistical process control charts. Social Work Research, 25(2), 115-127.
