


Trinity College, Dublin

Generic Skills Programme

Statistics for Research Students

Laboratory 3: Control Charts and Statistical Significance

To complete the laboratory exercise, work your way through this handout, which is self-contained and self-explanatory. Work in pairs (two per machine), and learn from each other. Keep separate logs of your work. The tutor is available to help with technicalities and to discuss substantive issues if necessary.

Invitations to consider the results of Minitab analysis and their statistical and substantive interpretations are printed in italics. Take some time for this; consult your neighbour or tutor. Enter your responses in a Word document, as if draft contributions to a report on the experiment and its analysis.

Topics:

1. Control charts; preliminary analysis and set-up

2. Simulating sampling distributions

3. One-sample significance tests and control charts

4. Application to paired comparisons

Control charts are used in two modes, ongoing monitoring of process measurements and analysis of historical process data. The latter may be used at the initial stage of setting up ongoing process monitoring, with the results of the analysis being used to determine the parameters of the control chart for ongoing monitoring. Historical analysis should also be used on a regular basis to assess overall recent process performance, with a view to identifying actual or potential problems and thereby setting up opportunities for improvement.

Part 1 of today's Laboratory is based on an example of the use of historical analysis of regularly sampled process data to learn about process variation and to determine the set up for ongoing monitoring.

The performance of the control chart in ongoing process monitoring may be assessed theoretically through the sampling distribution of the statistic of interest. Alternatively, simulation of ongoing process performance may be used as a basis on which to assess the performance of the control chart, effectively simulating the sampling distribution and assessing its characteristics through observation. This is undertaken in Part 2.

In Part 3, the correspondence between the logic of control chart monitoring and statistical significance testing is explored and the use of statistical significance testing in historical control chart analysis is demonstrated. The analysis rests on theoretical assumptions about the process that generated the data: Normality, homogeneity and independence of the sampled values. Diagnostic analysis is used to assess these assumptions.

In Part 4, the simple one-sample test that underlies control chart monitoring is applied in an experimental setting, where the "matched pairs" design is used as a highly effective approach to investigating the effect of changing a factor thought to influence a process. The test is applied to the differences resulting from the paired comparisons that the design produces.

Learning Objectives:

Be able to

– use Minitab to create control charts from subgrouped data

– interpret the resulting charts in context and decide on relevant action

– use Minitab to create control charts for subsets of the data

– explain the basis for choosing control chart limits in terms of chart behaviour in repeated sampling

– simulate repeated samples and observe control chart behaviour

– create suitable graphical summaries of control chart behaviour to display the effect of increasing sample size

– use the Minitab graph editor to improve display intelligibility

– provide informative interpretive comments on the results of the graphical analysis, with specific reference to standard error

– use Minitab to reconfigure subgrouped data in a single column as individual subgroups in separate columns

– use Minitab to apply a one sample Z test to the data in the individual subgroups

– calculate a one sample Z test using the standard formula

– verify the correspondence between the Minitab Z test, the formula based Z test and the Xbar control chart test, with specific reference to correspondence between critical values and correspondence between significance levels

– use Minitab to group relevant subgroups and use a t test to test a relevant hypothesis separately for each group

– verify the numerical results of the tests by calculation using the relevant formula

– provide formal reports on the results of the t tests

– explain the effects of varying mean, standard deviation and sample size on the result of a t test

– calculate appropriate residuals, produce appropriate graphs and use appropriate graphical analysis to check the assumptions underlying the validity of control chart analysis

– produce, edit and interpret profile plots of paired sample measurements and corresponding differences

– produce, edit and interpret a scatter plot of differences of pairs versus averages of pairs and a Normal plot of differences

– use Minitab to test the statistical significance of the mean difference, verify the calculations and make a formal report on the result of the test.

1. Control charts; preliminary analysis and set-up

As part of a preliminary study of the bill-paying behaviour of its customers, a mail order company collected data on the number of days to collection of accounts receivable. Rather than observe every account, the first five accounts received on Monday morning each week were taken as a sample of the process. After 30 weeks, the data were reviewed. The first step was to assess the stability of the process over the 30 week period. For this purpose, X-bar and s charts were made using Minitab.

The data are available in the Days to Payment data set in the GenericSkillsData folder. Copy the data to Minitab and create control charts as follows:

• from the Stat menu, select Control Charts, then Variables Charts for Subgroups, then Xbar-s Chart,

• in the top right window of the dialog box, select "Observations for a subgroup are in one row of columns":

• tab to the next window, then select 'Sample 1'-'Sample 5' from the variables list,

• click the Xbar-S Options button, click the Estimate tab, uncheck the "Use unbiasing constant" box[1],

• click OK, OK.

Discuss the stability of the days to payment process with regard to

(a) level

(b) spread

What is your next step in the analysis?

As the variation in the Days to Payment process appeared stable during the last 22 weeks, it was decided to monitor the process on an ongoing basis, using control limits based on the recent data. To establish these, charts based on the recent data were produced, using the following procedure:

• from the Edit menu, select Edit Last Dialog (or press Ctrl+E),

• click on Data Options,

• check "Specify rows to include", then "Rows that match", click "Condition",

• enter Week > 8 as the Condition,

• click OK, OK, OK.

Confirm that the control limits for the Xbar chart conform to the formula CL ± 3σ/√n.

List values for centre lines and control limits to use for further process monitoring. Use sensible rounding.
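
For those who wish to check the centre line and limits outside Minitab, the short Python sketch below performs the same calculation. It assumes the weekly samples have been loaded into a NumPy array called samples, with one row per week and one column per measurement; the array name and the commented-out file name are illustrative only, and the σ estimate is s-bar with no unbiasing constant, matching the option chosen above.

import numpy as np

def xbar_s_limits(samples):
    """Centre line and 3-sigma Xbar limits from subgrouped data (rows = subgroups)."""
    n = samples.shape[1]                    # subgroup size (5 here)
    xbar = samples.mean(axis=1)             # subgroup means
    s = samples.std(axis=1, ddof=1)         # subgroup standard deviations
    centre = xbar.mean()                    # centre line of the Xbar chart
    sigma_hat = s.mean()                    # s-bar, no unbiasing constant applied
    half_width = 3 * sigma_hat / np.sqrt(n)
    return centre - half_width, centre, centre + half_width

# Illustrative use (file name and layout are assumptions); restrict the rows to
# Weeks 9-30 first, to match the Data Options condition above.
# samples = np.loadtxt("DaysToPayment.csv", delimiter=",", skiprows=1, usecols=range(1, 6))
# print(xbar_s_limits(samples[8:]))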

2. Simulating sampling distributions

The basis for interpreting control charts is the notion of a sampling distribution. In the context of process sampling, this requires us to think of sampling continuing indefinitely. Then, if the process is in control, very few samples will give a value of X-bar outside the control limits; in the long run, the Normal model indicates that less than 0.3% (about 0.27%) of X-bar values will fall outside the control limits. If, in the short term, we observe such a point, we are inclined to believe that it reflects an assignable cause of variation rather than chance causes associated with the Normal model. Put another way, assuming the process is in control makes observing a point outside the control limits improbable, so that observing such a point makes the in-control assumption implausible.

We can verify these properties of the X-bar chart by simulation, as follows:

• from the File menu, select New, then Minitab Project,

• click No, OK.

• generate 30 rows of Normal data in columns C1-C5, with mean and standard deviation set to the values you prescribed at the end of Part 1,

• name the columns Sample 1 – Sample 5

• use Calc / Row Statistics to calculate the means of each row of five values, each row representing a simulated sample of 5, store in C6,

• name C6 as Xbar

• count the number of values of Xbar outside the control limits you prescribed at the end of Part 1,

Discuss the result of this simulation.

For a more informative view of the results of the simulation,

• make a histogram of the 30 sample means, with fitted Normal curve.

Comment on the result.

The next step is to increase the size of the simulation to 300 samples and then 3,000 samples. Counting out of control values by hand becomes impractical. Minitab can be made to count the values outside the control limits, as follows:

• from the Calc menu, select Calculator,

• Store result in variable C7,

• from the Function menu, select the If function,

• as the test, enter C6 < LCL OR C6 > UCL, where LCL and UCL are the control limit values you prescribed for the Xbar chart at the end of Part 1,

• substitute 1 for value_if_true,

• substitute 0 for value_if_false,

• check the Assign as a formula box,

• click OK,

• name C7 as OutOfControl,



• from the Calc menu, select Calculator,

• Store result in variable C8,

• clear the Expression window and, from the Function menu, select the Sum function,

• enter C7 (OutOfControl) as the argument of the Sum function,

• check the Assign as a formula box,

• click OK,

• name C8 as Count

Check the result.

Next, repeat the simulation with 300 samples (rows), then 3,000 samples, in C1 – C5, and make a histogram of Xbar in each case. Note that the only dialog that needs changing is the initial Random Data dialog, where the number of rows to generate is set; for the other commands you can go straight to the final OK button. Also, the out-of-control count updates automatically, as a result of checking the "Assign as a formula" box.
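
If you would like to see the same simulation outside Minitab, the following Python/NumPy sketch generates the samples, computes the means and counts the out-of-control values in one pass; mu, sigma and the derived limits are placeholders, to be replaced by the values you prescribed at the end of Part 1.

import numpy as np

rng = np.random.default_rng()
mu, sigma, n = 70.0, 8.0, 5                           # placeholder values; substitute your own
lcl = mu - 3 * sigma / np.sqrt(n)                     # lower control limit
ucl = mu + 3 * sigma / np.sqrt(n)                     # upper control limit

for reps in (30, 300, 3000):
    samples = rng.normal(mu, sigma, size=(reps, n))   # each row is one simulated sample of 5
    xbar = samples.mean(axis=1)                       # simulated sample means
    out = int(np.sum((xbar < lcl) | (xbar > ucl)))    # out-of-control count
    print(reps, out)

In the long run roughly 0.27% of means fall outside 3-sigma limits, so the expected counts are about 0.08, 0.8 and 8 for 30, 300 and 3,000 repetitions, subject to chance variation.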

Compare the results of the simulations.

What out-of-control count did you get in each case? Keep a record.

What did you expect?

How do the histograms compare?

Repeat the simulation with 3000 repetitions 4 times, recording the out-of-control count each time.

In the data sheet circulated in class, enter the out-of-control counts for all 5 simulations based on 3,000 repetitions, for later discussion.

Simulating the effect of increasing sample size

Increasing the size of the samples on which the control charts are based provides more information about the process being sampled and, therefore, should increase the precision of our conclusions regarding the process being sampled. Simulation can be used to study this effect. Here, we will compare histograms of means of 1000 samples of sizes 1, 5, 10, 20, 40, as follows:

• generate 1000 rows of Normal data into C1-C40, with mean and standard deviation set to the values you prescribed at the end of Part 1,

• use the Row Statistics command to calculate the means of C1-C5, C1-C10, C1-C20 and C1-C40, storing the results in C41, C42, C43 and C44 respectively,

• name C40 (a single column of individual values, representing samples of size 1) as "n = 1", C41 as "n = 5", C42 as "n = 10", C43 as "n = 20" and C44 as "n = 40",

• make histograms of C40-C44 with fitted Normal curves,

• select Scale and delete all Y scale axes and ticks and the high X scale axis,

• select Multiple Graphs, check On separate graphs and Same X scale for graphs,

• click OK.

The histograms are cascaded, with the last (n = 40) showing on top. Successively show the other histograms by clicking on them in turn, back to the first (n = 1).

Optionally, use the layout tool to improve comparability:

• from the Editor menu, select the Layout tool,

• set Rows = 5, Columns = 1,

• transfer Histogram of n = 1 to the first cell of the layout (clearing other histograms first if necessary), then successively transfer the histograms for n = 5, n = 10, n = 20, n = 40,

• click Finish,

• use Edit Graph Regions to set Graph Size, setting Width to be half Height and ensuring that Zoomed Size is set to Fit Window,

• drag the Layout window by its Title bar, as high as possible,

• resize the Layout window by dragging its bottom right corner as low as possible, while making it fit the graph region (no grey area showing),

• select and delete all figure titles and Y axis titles, leaving X axis titles (n = 1, etc.) as they are.

Comment on the effect of increasing sample size, referring to

– histogram spread,

– values of StDev,

– values of Mean.

Compare

– the values of StDev for n = 5 and n = 20,

– the values of StDev for n = 10 and n = 40,

noting that the larger sample size is 4 times the smaller in both cases.

Explain your comparisons in terms of the formula σ/√n for the standard error of the sample mean.
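
As a quick numerical check of this formula, the short Python sketch below tabulates σ/√n for the sample sizes used; sigma is a placeholder for the standard deviation you prescribed at the end of Part 1.

import math

sigma = 8.0                                  # placeholder; substitute your own value
for n in (1, 5, 10, 20, 40):
    print(n, round(sigma / math.sqrt(n), 2)) # standard error of the mean for each n
# Quadrupling n (5 -> 20, or 10 -> 40) halves the standard error, since sqrt(4) = 2.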

3. One-sample significance tests and control charts

The logic of control chart analysis closely parallels that of statistical significance tests. The two activities are identical in the case that the control chart is being used to monitor the adherence of a process to a standard or a specification[2]. To illustrate the latter, data sampled from a mechanical process to manufacture a metal clip such as that illustrated below will be studied. The critical measurement being monitored is the "clip gap", indicated by the arrowed line.

[Figure: metal clip; the arrowed line indicates the clip gap]

According to the engineering specification, the clip gap is required to be between 50 mm and 90 mm[3].

The process is sampled every two hours, with 5 clips being sampled and measured. Data for 25 successive samples of 5 measurements may be found in the Clip gaps dataset in the GenericSkillsData folder and are tabled below.

Sample  |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12
        | 65 | 75 | 75 | 60 | 70 | 60 | 75 | 60 | 65 | 60 | 80 | 85
        | 70 | 85 | 80 | 70 | 75 | 75 | 80 | 70 | 80 | 70 | 75 | 75
        | 65 | 75 | 80 | 70 | 65 | 75 | 65 | 80 | 85 | 60 | 90 | 85
        | 65 | 85 | 70 | 75 | 85 | 85 | 75 | 75 | 85 | 80 | 50 | 65
        | 85 | 65 | 75 | 65 | 80 | 70 | 70 | 75 | 75 | 65 | 80 | 70

Sample  | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25
        | 70 | 65 | 90 | 75 | 75 | 75 | 65 | 60 | 50 | 60 | 80 | 65 | 65
        | 70 | 70 | 80 | 80 | 85 | 70 | 65 | 60 | 55 | 80 | 65 | 60 | 70
        | 75 | 85 | 80 | 75 | 70 | 60 | 85 | 65 | 65 | 65 | 75 | 65 | 70
        | 75 | 75 | 75 | 80 | 80 | 70 | 65 | 60 | 80 | 65 | 65 | 60 | 60
        | 70 | 60 | 85 | 65 | 70 | 60 | 70 | 65 | 80 | 75 | 65 | 70 | 65

Given the engineering specifications, a plausible target mean value for this process is 70 mm. Historical data suggest a standard deviation of 8. With this information, a control chart for monitoring the process may be produced as follows:

• from the File menu, select New, then Minitab Project,

• click No, OK.

• copy the data from the GenericSkillsData folder to Minitab,

• from the Stat menu, select Control Charts, then Variables Charts for Subgroups, then Xbar-s Chart,

• in the top right window of the dialog box, select "All observations for a chart are in one column":

• tab to the next window, then select Clip gap from the variables list,

• enter 5 as subgroup size,

• click the Xbar-S Options button,

• click the Parameters tab and enter 70 for the mean and 8 for the standard deviation,

• click the Estimate tab and ensure the "Use unbiasing constant" box is unchecked,

• click OK, OK.

In real time, the chart you have produced would build up sample by sample, one every two hours. Here, all samples have been charted at once.

You can monitor the process, sample by sample, by carrying out tests of significance on the successive samples. First, put the separate samples in separate columns:

• from the Data menu, select Unstack Columns,

• click in the Unstack the data in: window and select the Clip gap column,

• click the Using subscripts in: window and select Sample,

• select After last column in use and check Name the columns containing the unstacked data,

• click OK.

Next, test the hypothesis that the process is on target using one of the samples represented in the Xbar chart, say Sample 5:

• from the Stat menu, select Basic Statistics, then 1-Sample Z,

• click in the Samples in columns: window and select Clip gap_5,

• enter 8 for Standard deviation, check Perform hypothesis test, enter 70 for Hypothesised mean:,

• click OK.

What is the value of Z?

What is the value of P?

What conclusion do you draw?

Correspondence between control chart test and significance test

You have used a Z test here to emulate the control chart test. The formula for the Xbar chart control limits is

CL ± 3σ/√n, that is, 70 ± 3×8/√5

The formula for Z is

Z = (Xbar − μ)/(σ/√n), which here is (Xbar − 70)/(8/√5), with μ the hypothesised mean.

Note that the relevant sampling distribution for Z is the standard Normal distribution, as the value for σ is assumed to be the known value of 8.
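
As a cross-check outside Minitab, the Python/SciPy sketch below computes Z and its two-sided P-value for Sample 5 (values read from the table above) and evaluates the significance level that corresponds to the 3-sigma chart limits.

import numpy as np
from scipy.stats import norm

sample5 = np.array([70, 75, 65, 85, 80])     # Sample 5 column of the clip gap table
mu0, sigma = 70.0, 8.0                       # hypothesised mean and known standard deviation
n = len(sample5)

z = (sample5.mean() - mu0) / (sigma / np.sqrt(n))
p = 2 * norm.sf(abs(z))                      # two-sided P-value
print("Z =", round(z, 2), " P =", round(p, 3))

# The Xbar chart signals only when |Z| exceeds 3; the matching significance level is
print("alpha for |Z| > 3:", round(2 * norm.sf(3), 4))   # about 0.0027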

Verify the values of the control limits as shown on the Xbar chart.

Verify the value of the Z statistic as shown in the Session window.

Verify the correspondence between the control chart test and the Z test.

What critical value for Z is needed to ensure the correspondence?

What is the significance level corresponding to this critical value?[4] Use the Normal table and / or the Minitab Normal cumulative distribution function (Calc menu).

Check the comparison of the P-value shown in the Session window with the significance level of the Z test and verify the conclusion of the Z test.

Repeat the Z test for Sample 15; repeat the verification exercise.

An extension of the control chart Z test

A review of the Xbar chart suggests a process shift after Sample 17; most of the points up to Sample 17 are above the centre line, while all of the points after Sample 17 are below it[5]. This is reinforced by re-analysing the (now historical) data in separate subsets. In Minitab, this is achieved by using "Staging", as follows:

• from the Stat menu, go to Xbar-s Chart, note existing set-up,

• click the Xbar-S Options button,

• click the Parameters tab and clear 70 as mean and 8 as standard deviation,

• click the Estimate tab and ensure the "Use unbiasing constant" box is unchecked,

• click the Stages tab, enter Sample as the variable to start a new stage and 18 as the value to start a new stage,

• click OK, OK.

Interpret the revised charts.

Comment on the non-significance of Sample 15.

It appears that the process is stable within each subset but centred at a mean value that may be different from 70. While the individual samples may not have suggested this (apart from Sample 15), we are now in a position to test the "on-target" hypothesis within each subset based on combined samples of historical data. These combined samples are much larger than the size 5 samples used in the ongoing monitoring.

Implement these tests as follows, first using the Stack command to combine the individual samples into the relevant subsets:

• from the Data menu, select Stack, then Columns,

• in the "Stack the following columns:" window, select Clip gap_1 to Clip gap _17,

• select "Column of current worksheet:" and enter C28,

• click OK,

• in the data sheet window, name C28 as "Before",

• stack the remaining samples in C29 and name as "After",

• from the Stat menu, choose Basic Statistics, then 1-Sample t,

• click in the "Samples in columns:" window and select Before and After,

• check Perform hypothesis test and enter 70 as the Hypothesised mean,

• click OK.

The column names "Before" and "After" refer to before and after the process adjustment referred to in Footnote 5.

What is the result of the t-test applied to the "Before" data? Report formally in terms of

Null hypothesis

Test statistic

Calculated value

Critical value

Comparison

Conclusion

Confirm the values of N, SE Mean and T for the "Before" data. How many degrees of freedom did you associate with T?

Repeat the above analysis for the "After" data.

Explain why the deviation of the "Before" data from 70 appears considerably more significant than that of the "After" data. What factors influence this difference between the two tests?

Diagnostic analysis

Before accepting the results of the two significance tests as reported (and, indeed, the application of the control chart methodology), we need to be assured that the assumptions underlying the analysis were valid. A range of diagnostics is available for this purpose.

To assess the homogeneity of spread, the s chart may be used.

Review the s charts you constructed earlier and comment on the "constant standard deviation" assumption.

Independence may be assessed by making time series charts, as follows:

• from the Graph menu, select Time Series Plot, then Simple,

• select Before as the Series variable,

• click OK,

• repeat for the "After" variable.

Describe the variation pattern(s) you see in these data.

Are there any patterns that would undermine an assumption of pure chance variation?

The assumption of Normality may be checked using Normal diagnostic plots:

• make Normal diagnostic plots of the "Before" and "After" variables.

Describe the variation pattern(s) you see in these data.

Are there any patterns that would undermine an assumption of Normality?

Diagnostic analysis using Residuals

These diagnostic analyses may be applied to the entire data set by first calculating "residuals", that is, by subtracting the appropriate subset mean from the values in each subset so that both subsets, so adjusted, have a common mean of 0, and then recombining the subsets into one again, as follows:

• from the Calc menu, select Calculator,

• Store result in variable C30,

• in the Expression window, select the "Before" variable, enter a minus, then select function MEAN and select the "Before" variable again, click OK,

• repeat for the "After" variable, storing the result in C31,

• stack columns C30 and C31 in C32,

• make a time series plot and a Normal plot of C32.

Describe the variation pattern(s) you see in these data.

Are there any patterns that would undermine an assumption of pure chance variation or of Normality?
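
The residual calculation and the two diagnostic plots can also be reproduced in Python, using matplotlib and SciPy for the time series and Normal (Q-Q) displays; before and after are the same arrays assumed in the t-test sketch above.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def residual_diagnostics(before, after):
    """Subtract each subset's own mean, recombine, then plot the residuals."""
    resid = np.concatenate([before - before.mean(), after - after.mean()])
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(resid, marker="o")              # residuals in observation order (time series check)
    ax1.set_title("Residuals in time order")
    stats.probplot(resid, plot=ax2)          # Normal diagnostic (Q-Q) plot
    plt.show()

# residual_diagnostics(before, after)        # arrays loaded as in the t-test sketch above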

4. Application to Paired Comparisons

Paired comparisons arise from a very effective form of experimental design, the matched pairs design, for evaluating the effect of changing a factor of interest. Testing the statistical significance of the factor effect in such a design reduces to a one sample test. This set up is illustrated here via a study designed to assess the effect of cigarette smoking on the aggregation of platelets in the blood, a key factor in the formation of blood clots.

Blood was taken from 11 subjects before and after they smoked a cigarette and the percentage of platelets aggregated in each sample was measured, with the following results.

Subject | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  | 10 | 11
Before  | 25 | 25 | 27 | 44 | 30 | 67 | 53 | 53 | 52 | 60 | 28
After   | 27 | 29 | 37 | 56 | 46 | 82 | 57 | 80 | 61 | 59 | 43

A more informative view is afforded by a profile plot, which allows a comparison of the Before and After profiles across all subjects. The Line Plot command in the Graph menu may be used to produce this, as follows:

• from the File menu, select New, then Minitab Project,

• click No, OK.

• copy the data from the Platelets dataset in the GenericSkillsData folder to Minitab,

• from the Graph menu, select Line Plot, then Series in Rows or Columns (With Symbols), click OK,

• select Before and After as the Graph variables,

• select Subject as the Label column,

• select "Each column forms a series",

• click OK,

• double click on the chart title and change "Line" to "Profile",

• double click the Y axis title and rename it as "Platelets, per cent".

Comment on the variation pattern in the graph, or lack thereof, with particular attention to correspondences between pairs of measurements on subjects.

Given the correspondence pattern, this graph may be enhanced by adding the subject differences:

• use the Minitab Calculator to enter the differences, After minus Before, in the next available column, name as Difference,

• repeat the Line Plot command, this time adding Difference to the list of variables to be plotted,

• click OK.

Comment on the variation pattern.

How does the range of variation of the differences relate to the ranges of variation of the measurements?

How does the size of the differences relate to the size of the measurements?

To answer the last question, a more powerful view is provided by a graph of the subject differences against the subject averages, constructed as follows:

• use the Row Means command from the Calc menu to enter the subject (row) means of Before and After in the next available column, name as Means,

• from the Graph menu, select Scatterplot, then Simple,

• select Difference as the Y variable, Means as the X variable,

• click OK.
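
The same plot can be reproduced outside Minitab with a minimal matplotlib sketch, using the platelet values from the table above.

import numpy as np
import matplotlib.pyplot as plt

before = np.array([25, 25, 27, 44, 30, 67, 53, 53, 52, 60, 28])
after = np.array([27, 29, 37, 56, 46, 82, 57, 80, 61, 59, 43])
means = (before + after) / 2                 # subject averages
diffs = after - before                       # subject differences

plt.scatter(means, diffs)
plt.xlabel("Mean of Before and After, per cent")
plt.ylabel("Difference (After minus Before), per cent")
plt.show()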

Comment on the variation pattern.

How does the size of the differences relate to the size of the measurements?

How do you regard the largest difference?

A Normal diagnostic plot provides a better assessment of the size of the largest difference.

• Produce a Normal plot of the differences.

Comment on the Normality of the differences and on the exceptional (or otherwise) status of the largest difference.

Note the similarity of the next three largest differences.

Having investigated the validity of the standard assumptions for testing these data, admittedly with a rather small sample, we now proceed to test the significance of the differences between After and Before, as follows:

• use the Basic Statistics command from the Stat menu to implement a one-sample t-test of the significance of the differences from 0,

• alternatively (or in addition), use the paired t option.

Confirm the values of SE Mean, T and P.
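
For confirmation outside Minitab, the Python/SciPy sketch below applies the paired t test, and the equivalent one-sample t test of the differences, to the platelet data tabled above.

import numpy as np
from scipy.stats import ttest_rel, ttest_1samp

before = np.array([25, 25, 27, 44, 30, 67, 53, 53, 52, 60, 28])
after = np.array([27, 29, 37, 56, 46, 82, 57, 80, 61, 59, 43])
diff = after - before

t, p = ttest_rel(after, before)              # paired t test
se = diff.std(ddof=1) / np.sqrt(len(diff))   # SE Mean of the differences
print(f"mean difference = {diff.mean():.2f}, SE Mean = {se:.3f}, T = {t:.2f}, P = {p:.4f}")

# The same result comes from a one-sample t test of the differences against 0:
print(ttest_1samp(diff, 0.0))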

Make a formal report of the result, as previously (see the reporting format used in Part 3).

Conclusion

This concludes Laboratory 3. The learning objectives listed at the outset are reproduced here. Check them individually and ensure that you have achieved each one; seek help from the Tutor if necessary.

Learning Objectives:

Be able to

– use Minitab to create control charts from subgrouped data

– interpret the resulting charts in context and decide on relevant action

– use Minitab to create control charts for subsets of the data

– explain the basis for choosing control chart limits in terms of chart behaviour in repeated sampling

– simulate repeated samples and observe control chart behaviour

– create suitable graphical summaries of control chart behaviour to display the effect of increasing sample size

– use the Minitab graph editor to improve display intelligibility

– provide informative interpretive comments on the results of the graphical analysis, with specific reference to standard error

– use Minitab to reconfigure subgrouped data in a single column as individual subgroups in separate columns

– use Minitab to apply a one sample Z test to the data in the individual subgroups

– calculate a one sample Z test using the standard formula

– verify the correspondence between the Minitab Z test, the formula based Z test and the Xbar control chart test, with specific reference to correspondence between critical values and correspondence between significance levels

– use Minitab to group relevant subgroups and use a t test to test a relevant hypothesis separately for each group

– verify the numerical results of the tests by calculation using the relevant formula

– provide formal reports on the results of the t tests

– explain the effects of varying mean, standard deviation and sample size on the result of a t test

– calculate appropriate residuals, produce appropriate graphs and use appropriate graphical analysis to check the assumptions underlying the validity of control chart analysis

– produce, edit and interpret profile plots of paired sample measurements and corresponding differences

– produce, edit and interpret a scatter plot of differences of pairs versus averages of pairs and a Normal plot of differences

– use Minitab to test the statistical significance of the mean difference, verify the calculations and make a formal report on the result of the test.

-----------------------

[1] Division by n – 1 in the denominator of s eliminates the bias in s² as an estimate of σ², but this does not apply to s; division by n
