Analysis of Experiments with Random Effects

Example 1 – Variation in looms in textile manufacturing

How much of the variation in fabric strength is due to the fact that different looms are used to produce the fabric? To answer this question, plant engineers randomly sampled four looms (from many) at the plant and tested the fabric strength of n = 4 fabric samples from each. The data entered into JMP are shown below.

[JMP data table of the loom and strength measurements]

The random effects model for these data is given by:

$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad i = 1, 2, 3, 4 \ \text{(looms)}, \quad j = 1, 2, 3, 4 \ \text{(replicates)}$$

where we assume

$$\tau_i \sim N(0, \sigma_\tau^2) \quad \text{and} \quad \varepsilon_{ij} \sim N(0, \sigma^2).$$

This implies that the total variation in the fabric strengths is

$$\operatorname{Var}(y_{ij}) = \sigma_\tau^2 + \sigma^2.$$

We want to estimate both variance components from the data and determine what percentage of the total variation can be attributed to the fact that different looms are used to produce fabric in the factory.
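Although these notes use JMP for the analysis, the E(MS) (ANOVA) calculation is easy to verify outside JMP. Below is a minimal Python sketch of the method; the strength values are the standard textbook loom data and are assumed to match the JMP table above (they reproduce the 78.6% / 21.4% split reported later in these notes), so treat them as illustrative rather than authoritative.

    import numpy as np
    import pandas as pd

    # Long-format data: one row per fabric sample (4 looms x 4 samples each).
    # Assumed to match the JMP table above; illustrative values.
    df = pd.DataFrame({
        "Loom": np.repeat([1, 2, 3, 4], 4),
        "Strength": [98, 97, 99, 96,
                     91, 90, 93, 92,
                     96, 95, 97, 95,
                     95, 96, 99, 98],
    })

    a = df["Loom"].nunique()                     # number of looms sampled (4)
    n = int(df.groupby("Loom").size().iloc[0])   # replicates per loom (4, balanced)

    grand_mean = df["Strength"].mean()
    loom_means = df.groupby("Loom")["Strength"].mean()

    # ANOVA mean squares
    ms_loom  = n * ((loom_means - grand_mean) ** 2).sum() / (a - 1)
    ms_error = ((df["Strength"] - df["Loom"].map(loom_means)) ** 2).sum() / (a * (n - 1))

    # E(MS) method: E(MS_Looms) = sigma^2 + n*sigma_tau^2 and E(MS_Error) = sigma^2
    sigma2_hat     = ms_error
    sigma_tau2_hat = (ms_loom - ms_error) / n

    total = sigma2_hat + sigma_tau2_hat
    print(f"sigma_tau^2 = {sigma_tau2_hat:.2f} ({100 * sigma_tau2_hat / total:.1f}% of total)")
    print(f"sigma^2     = {sigma2_hat:.2f} ({100 * sigma2_hat / total:.1f}% of total)")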

Analysis in JMP

Select Analyze > Fit Model, put Strength in the Y box, and put Loom in the model effects box. The critical step is to highlight Loom in the model effects box and select Random Effect from the Attributes pull-down menu, as shown below.

[JMP Fit Model dialog: Strength as Y, Loom entered as a random effect]

Results of E(MS) Method for Estimating Variance Components

[JMP output: E(MS) method variance component estimates]

If we use the REML method we get essentially the same estimates, along with a 95% CI for $\sigma_\tau^2$.

[JMP output: REML variance component estimates with 95% confidence intervals]

The CI is too wide to be useful! To get precise estimates of variance components, much larger sample sizes are needed.
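For readers working outside JMP, the same one-way random effects model can also be fit by REML with the statsmodels package; this is a sketch of an assumed-equivalent model, not the JMP output, and it reuses the data frame df from the sketch above.

    import statsmodels.formula.api as smf

    # One-way random effects (random intercept for Loom) fit by REML.
    model = smf.mixedlm("Strength ~ 1", data=df, groups=df["Loom"])
    fit = model.fit(reml=True)

    sigma_tau2_reml = float(fit.cov_re.iloc[0, 0])  # loom-to-loom variance component
    sigma2_reml     = float(fit.scale)              # residual (within-loom) variance
    print(sigma_tau2_reml, sigma2_reml)

For a balanced data set like this one, the REML estimates agree with the E(MS) estimates whenever the latter are non-negative, which is why JMP reports essentially the same numbers.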

Confidence Interval for Percent of Variation Due to an Effect (pgs. 489-490)

A 100(1 − α)% CI for $\sigma_\tau^2 / (\sigma_\tau^2 + \sigma^2)$ is given by:

$$\frac{L}{1+L} \;\le\; \frac{\sigma_\tau^2}{\sigma_\tau^2 + \sigma^2} \;\le\; \frac{U}{1+U}$$

where

$$L = \frac{1}{n}\left(\frac{MS_{Treatments}}{MS_E} \cdot \frac{1}{F_{\alpha/2,\,a-1,\,N-a}} - 1\right)
\qquad\text{and}\qquad
U = \frac{1}{n}\left(\frac{MS_{Treatments}}{MS_E} \cdot \frac{1}{F_{1-\alpha/2,\,a-1,\,N-a}} - 1\right)$$

Note: For unequal replicates, replace n by n₀, whose formula is given by equation (13-9) on pg. 487.

Constructing a 95% CI for $\sigma_\tau^2 / (\sigma_\tau^2 + \sigma^2)$ for the loom data, we first need the F-quantiles, which can be calculated using either the file F-quantile Calculator.JMP on the class server or the F-table in the appendix of your text. One of the two quantiles is not tabulated directly; to find it using the table you have to use the fact that

$$F_{1-\alpha,\,\nu_1,\,\nu_2} = \frac{1}{F_{\alpha,\,\nu_2,\,\nu_1}} \qquad \text{(see pg. 490 for an example)}$$

For the loom data a = 4 and n = 4, so the required quantiles are $F_{0.025,\,3,\,12}$ and $F_{0.975,\,3,\,12}$, and

$$F_{0.975,\,3,\,12} = \frac{1}{F_{0.025,\,12,\,3}}$$

Thus $F_{0.025,\,3,\,12} = 4.47$ and $F_{0.975,\,3,\,12} \approx 0.070$.

Thus we have,

$$L = \frac{1}{4}\left(\frac{MS_{Looms}}{MS_E} \cdot \frac{1}{4.47} - 1\right)
\qquad\text{and}\qquad
U = \frac{1}{4}\left(\frac{MS_{Looms}}{MS_E} \cdot \frac{1}{0.070} - 1\right)$$

which gives,

$$\frac{L}{1+L} \approx 0.38 \qquad\text{and finally}\qquad \frac{U}{1+U} \approx 0.98.$$

So looms account for between 38% and 98% of the total variation in fabric strength. Again, this interval is very wide because of the small number of replicates; however, we certainly know that loom-to-loom variation is not negligible.
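The interval above can be reproduced numerically. Below is a minimal scipy sketch; the mean squares are assumed values consistent with the 78.6% loom estimate reported in these notes, not numbers copied from the JMP output.

    from scipy import stats

    a, n = 4, 4                        # looms sampled, replicates per loom
    df1, df2 = a - 1, a * (n - 1)      # 3 and 12 degrees of freedom
    ms_loom, ms_error = 29.73, 1.90    # assumed mean squares (consistent with 78.6% above)
    alpha = 0.05

    # Percentage points of F(3, 12); note F_{1-alpha, v1, v2} = 1 / F_{alpha, v2, v1}.
    f_hi = stats.f.ppf(1 - alpha / 2, df1, df2)   # F_{0.025, 3, 12}, about 4.47
    f_lo = stats.f.ppf(alpha / 2, df1, df2)       # F_{0.975, 3, 12}, about 0.070

    F0 = ms_loom / ms_error
    L = (F0 / f_hi - 1) / n
    U = (F0 / f_lo - 1) / n

    print(f"{L / (1 + L):.3f} <= sigma_tau^2 / (sigma_tau^2 + sigma^2) <= {U / (1 + U):.3f}")

Run with these assumed mean squares, the sketch returns roughly 0.385 and 0.982, matching the 38% to 98% interval above.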

Factorial Experiments with Random Effects

Example 1: Gage R & R Study

Schematic

[Schematic of the Gage R&R study: 3 operators each measure 20 parts, with 2 readings per part]

[JMP data table with columns Operator, Part, Trial, and Reading]

To conduct the analysis in JMP set up the effects as you would for a two-factor factorial design making sure to change each effect to random as shown below.

[JMP Fit Model dialog: Operator, Part, and Operator*Part entered as random effects]

Output from E(MS) approach to estimating the variance components.

[JMP output: E(MS) method variance component estimates]

E(MS) method estimates after dropping the operator*part interaction.

[JMP output: E(MS) estimates with the Operator*Part interaction removed]

REML estimates of the variance components; notice that the Operator*Part component is zeroed out.

[JMP output: REML variance component estimates]

The Gage R & R study here produced results indicating that almost all of the variation, roughly 92%, came from part-to-part variation. The fact that different operators were used accounted for less than one-tenth of a percent of the total variation, and the repeatability variance component was approximately 8%. This measurement system seems good; now we can focus on eliminating part-to-part variability by using designed experiments to identify potential sources of that variation.
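For reference, the E(MS) estimators for a two-factor random model (a operators, b parts, n repeat readings) can be written out directly from the expected mean squares. The sketch below uses hypothetical mean-square values chosen only to roughly mirror the percentages quoted above; they are not the values in the JMP output.

    # Two-factor random model expected mean squares:
    #   E(MS_Operator) = sigma^2 + n*sigma_op_part^2 + b*n*sigma_operator^2
    #   E(MS_Part)     = sigma^2 + n*sigma_op_part^2 + a*n*sigma_part^2
    #   E(MS_OpPart)   = sigma^2 + n*sigma_op_part^2
    #   E(MS_Error)    = sigma^2
    a, b, n = 3, 20, 2
    ms_operator, ms_part, ms_op_part, ms_error = 1.2, 56.0, 0.8, 0.8  # hypothetical values

    sigma2_hat         = ms_error                               # repeatability
    sigma_op_part_hat  = (ms_op_part - ms_error) / n            # interaction (can be negative)
    sigma_operator_hat = (ms_operator - ms_op_part) / (b * n)   # operator (reproducibility)
    sigma_part_hat     = (ms_part - ms_op_part) / (a * n)       # part-to-part

    components = {
        "part-to-part":  sigma_part_hat,
        "operator":      sigma_operator_hat,
        "operator*part": max(sigma_op_part_hat, 0.0),  # negative estimates truncated at 0
        "repeatability": sigma2_hat,
    }
    total = sum(components.values())
    for name, value in components.items():
        print(f"{name:>13}: {value:7.3f}  ({100 * value / total:5.1f}% of total)")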

Example 2 – Turbine Experiment (Example 13-7 pg. 507)

This is a three-factor experiment in which factor A (gas temperature) is fixed and factors B (operator) and C (pressure gauge) are random.

[JMP data table for the turbine experiment]

To fit the model, we set it up as we would for a three-factor factorial experiment and change all effects involving B and/or C to random effects, as shown below.

[JMP Fit Model dialog: all effects involving B and/or C set to random]

[JMP output: tests and variance component estimates (unrestricted approach)]

The E(MS) differ from those derived on the board because JMP uses the unrestricted approach when handling random interaction effects. We can still use the restricted approach; however, we have to do the testing and variance component estimation “by hand” using the mean squares returned by JMP.

[Hand calculations: restricted-approach tests and variance component estimates based on the JMP mean squares]

Which approach is best to use? If we are willing to carefully write out the E(MS) by hand, it does not take too much extra effort to analyze the data using the restricted model with JMP. If you are not willing to painstakingly write out the E(MS) for your mixed model, then I would use the unrestricted approach and go with all of the results returned by JMP. Many statistical software packages offer the use of either the restricted or the unrestricted approach, e.g., MINITAB.
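To see concretely where the two conventions diverge, it helps to look at the simpler two-factor mixed model with A fixed and B random (this is an illustration, not the three-factor turbine model itself). The standard expected mean squares differ only in E(MS_B):

$$
\begin{aligned}
\text{Restricted:}\quad & E(MS_B) = \sigma^2 + an\,\sigma_\beta^2, \qquad E(MS_{AB}) = \sigma^2 + n\,\sigma_{\tau\beta}^2 \\
\text{Unrestricted:}\quad & E(MS_B) = \sigma^2 + n\,\sigma_{\tau\beta}^2 + an\,\sigma_\beta^2, \qquad E(MS_{AB}) = \sigma^2 + n\,\sigma_{\tau\beta}^2
\end{aligned}
$$

Under the restricted convention B is tested with $MS_B/MS_E$ and estimated by $\hat{\sigma}_\beta^2 = (MS_B - MS_E)/(an)$; under the unrestricted convention (what JMP uses) B is tested with $MS_B/MS_{AB}$ and estimated by $\hat{\sigma}_\beta^2 = (MS_B - MS_{AB})/(an)$. The same bookkeeping, extended to three factors, is what the “by hand” restricted analysis above requires.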

-----------------------

Notes

We have two methods of estimation at our disposal: the E(MS) approach, which is used extensively in the text, and the REML (restricted maximum likelihood) method, which is discussed in section 13-7.3 of your text. For now we will concern ourselves with the E(MS) method.

p-value = .0002 ⇒ variation in the response due to looms is statistically significant.

E(MS) Table

$$E(MS_{Looms}) = \sigma^2 + n\,\sigma_\tau^2 \qquad\qquad E(MS_{Error}) = \sigma^2$$

Solving these for the variance components gives the E(MS) estimates

$$\hat{\sigma}_\tau^2 = \frac{MS_{Looms} - MS_{Error}}{n} \qquad\qquad \hat{\sigma}^2 = MS_{Error}$$

These estimates come directly from the E(MS) above, thus the name of the estimation method.

% variation due to looms = 78.6%

% variation due to error = 21.4%


We set up the spreadsheet in JMP as shown in the data table above. We need one column to denote the operator (1, 2, or 3), one column for the part being measured (1–20), and one column for the reading or measurement made. The trial column will not be used in the actual analysis: each operator measured each part twice, and trial denotes whether a measurement is the 1st or the 2nd.
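As a concrete picture of that layout (with hypothetical readings, not the actual gage data), the first few rows of such a table would look like:

    import pandas as pd

    # Long-format Gage R&R layout: one row per individual measurement.
    gage = pd.DataFrame({
        "Operator": [1, 1, 1, 1, 2, 2],
        "Part":     [1, 1, 2, 2, 1, 1],
        "Trial":    [1, 2, 1, 2, 1, 2],        # 1st or 2nd reading; not used in the model
        "Reading":  [21, 20, 24, 23, 20, 21],  # hypothetical readings
    })
    print(gage)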

Notice that the estimate for the operator*part variance component is negative! This is impossible because variances are by definition non-negative. However, we see that the interaction between operator and part is not significant (p = .8614). The E(MS) approach for estimating variance components will often lead to negative estimates for non-significant effects. The best thing to do here is to drop the non-significant terms and re-run the model, or to use the REML approach, which will not give negative variance component estimates because it will essentially drop from the model any effect yielding a negative variance component estimate.
