Comparing Group Means using Regression



Lecture 9: Qualitative Independent Variables
Comparing means using Regression
(I don't need no stinkin' ANOVA)

In linear regression analysis, the dependent variable should always be a continuous variable. The independent variables, on the other hand, do not have to be continuous – they can be categorical.

This lecture shows how categorical / qualitative variables – variables whose values represent different groups of people, not different quantities – can be incorporated into regression analyses, allowing comparison of the means of the groups.

We'll discover that IVs with only 2 values can be treated as if they were continuous IVs in any regression. But IVs with 3 or more values must be treated specially. Once that's done, they too can be included in regression analyses.

Regression with a single two-valued (dichotomous) predictor

Any two-valued independent variable can be included in a simple or multiple regression analysis. The regression can be used to compare the means of the two groups, yielding the same conclusion as the equal-variances independent-groups t-test.

Example: Suppose the performance of two groups trained using different methods is being compared. Group 1 was trained using a Lecture-only method. Group 2 was trained using a Lecture+CAI method. Performance was measured using scores on a final exam covering the material being taught. So the dependent variable is PERF – performance on the final exam. The independent variable is TP – Training Program: Lecture only vs. Lecture+CAI.

The data follow:

ID  TP  PERF     ID  TP  PERF     ID  TP  PERF     ID  TP  PERF
 1   1   37      13   1   57      25   1   37      38   2   56
 2   1   69      14   1   50      26   2   53      39   2   61
 3   1   64      15   1   58      27   2   62      40   2   62
 4   1   43      16   1   65      28   2   56      41   2   72
 5   1   37      17   1   48      29   2   61      42   2   46
 6   1   54      18   1   34      30   2   63      43   2   64
 7   1   52      19   1   44      31   2   34      44   2   60
 8   1   40      20   1   58      32   2   56      45   2   58
 9   1   61      21   1   45      33   2   54      46   2   73
10   1   48      22   1   35      34   2   60      47   2   57
11   1   44      23   1   45      35   2   59      48   2   53
12   1   65      24   1   52      36   2   67      49   2   43
                                  37   2   42      50   2   61

How should the groups be coded?

In the example data, Training Program (TP) was coded as 1 for the Lecture method and 2 for the L+CAI method. But any two values could have been used – 0 and 1, for example, or even 3 and 47. When the IV is a dichotomy, the specific values used to represent the two groups formed by the two values of the IV are completely arbitrary.

When one of the groups has whatever the other has plus something else, my practice is to give it the larger of the two values – often 0 for the group with less and 1 for the group with more. When one is a control group and the other is an experimental group, my practice is to use 0 for the control and 1 for the experimental.

Visualizing regressions when the independent variable is a dichotomy

When an IV is a dichotomy, the scatterplot takes on an unusual appearance: two columns of points, one over each of the two values of the IV. It can be interpreted the way all scatterplots are interpreted, although if the values of the IV are arbitrary, the sign of the relationship may not be a meaningful characteristic. For example, in the following scatterplot it would not make any sense to say that performance was positively related to training program. It would make sense, however, to say that performance was higher in the Lecture+CAI program than in the Lecture-only program.

In the graph of the example data, the best-fitting straight line has been drawn through the scatterplot.
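For reference, SPSS syntax along these lines produces the scatterplot and the regression output discussed below. This is a sketch rather than the syntax actually used for the lecture; it assumes the variables are named TP and PERF, as in the listing above.

    * Scatterplot of PERF against the dichotomous predictor TP.
    GRAPH
      /SCATTERPLOT(BIVAR)=TP WITH PERF.

    * Simple regression of PERF on TP.
    REGRESSION
      /STATISTICS COEFF OUTS R ANOVA
      /DEPENDENT PERF
      /METHOD=ENTER TP.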
When the independent variable is a dichotomy, the line will always go through the mean value of the dependent variable at each of the two independent variable values.

We'll notice that the regression coefficient – the B value – for Training Program is equal to the difference between the means of performance in the two programs. This will always be the case when the values used to code the two groups differ by one (1 vs. 2 in this example).

[Figure: scatterplot of PERF vs. TP with the best-fitting line; callouts mark "Mean Perf for Method 1" and "Mean Perf for Method 2".]

SPSS Output and its interpretation

Regression

[SPSS output. Annotations:]

R-square is the proportion of variance in Y related to differences between the groups. Some say that R-square is the proportion of variance related to group membership. So in this example, 14% of the variance of Y is related to group membership.

As was the case with simple regression with a continuous predictor, the information in the ANOVA summary table is redundant with the information in the Coefficients box below it.

Interpretation of (Constant): This is the expected value of the dependent variable when the independent variable = 0. If one of the groups had been coded 0, the Y-intercept would have been the expected value of Y in that group. In this example neither group is coded 0, so the value of the Y-intercept has no special meaning.

Interpretation of B when the IV has only two values: B = the difference in group means divided by the difference in X-values for the two groups. If the X-values for the groups differ by 1, as they do here, then B = the difference in group means.

The sign of the B coefficient

The sign of the B coefficient associated with a dichotomous variable depends on how the groups were labeled. In this case, the L-only group was labeled 1 and the L+CAI group was labeled 2. If the sign of the B coefficient is positive, the group with the larger IV value had the larger mean. If the sign of the B coefficient is negative, the group with the larger IV value had the SMALLER mean.

The fact that B is positive here means that the L+CAI group mean (coded 2) was larger than the L-only group mean (coded 1). If the labeling had been reversed, with L+CAI coded 1 and L-only coded 2, the sign of the B coefficient would have been negative.

The t-value

The t values test the hypothesis that each coefficient equals 0. In the case of the Constant, we don't care. In the case of the B coefficient, the t value tells us whether the B coefficient – and, equivalently, the difference in means – is significantly different from 0. The p-value of .007 indicates that the B value is significantly different from 0.

The bottom line

When the independent variable is a dichotomy, regression of the dependent variable onto that dichotomous independent variable is a comparison of the means of the two groups.

Relationship to independent groups t

You may be thinking that another way to compare performance in the two groups would be to perform an independent-groups t-test.
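In SPSS syntax, that t-test is a one-liner (a sketch, again assuming the variable names TP and PERF from the listing):

    T-TEST GROUPS=TP(1 2)
      /VARIABLES=PERF.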
This might then lead you to ask whether you'd get a result different from the regression analysis. The t-test on the data follows.

T-Test

[SPSS output. Annotations: The t here is the same t as in the Regression Coefficients table above. Note that the difference in means is 57.32 - 49.68 = 7.64.]

Note that the t-value is 2.792, the same as the t-value from the regression analysis. This indicates a very important relationship between the independent-groups t-test and simple regression analysis:

When the independent variable is a dichotomy, the simple regression of Y onto the dichotomy gives the same test of the difference in group means as the equal-variances-assumed independent-groups t-test.

As we'll see when we get to multiple regression, when independent variables represent several groups, the regression of Y onto those independent variables gives the same test of differences in group means as does the analysis of variance. That is, every test that can be conducted using analysis of variance can be conducted using multiple regression analysis.

Analysis of variance – a dinosaur methodology?

Yes, the formulas taught to old folks like me are dinosaur formulas. No self-respecting computer program would use the ANOVA formulas taught in many (but fewer each year) older statistics textbooks. All well-written computer programs convert the problem to a regression analysis and conduct the analysis as if it were a regression, using the techniques shown in the following.

But statistics is littered with dinosaurs. Among many analysts, regression analysis itself has been replaced by structural equation modeling, a much more inclusive technique. Among other analysts, the kinds of regression analyses we're doing have been replaced by multilevel analyses – again, a more inclusive technique in a different context. When you wake up tomorrow, statistical analysis will have changed.

Comparing Three Group Means using Regression

The problem

Consider comparing mean religiosity scores among three religious groups – Protestants, Catholics, and Jews. Suppose you had the following data:

Religion  Naïve Religion Code  Religiosity
Prot      1                     6
Prot      1                    12
Prot      1                    13
Prot      1                    11
Prot      1                     9
Prot      1                    14
Prot      1                    12
Cath      2                     5
Cath      2                     7
Cath      2                     8
Cath      2                     9
Cath      2                    10
Cath      2                     8
Cath      2                     9
Jew       3                     4
Jew       3                     3
Jew       3                     6
Jew       3                     5
Jew       3                     7
Jew       3                     8
Jew       3                     2

Obviously, we could compare the means using traditional ANOVA formulas. But suppose you wished to analyze these data using regression.

One seemingly logical approach would be to assign successive integers to the religion groups and perform a simple regression. In the above, the variable RELCODE is a numeric variable representing the 3 religions. Because this is NOT the appropriate way to represent a three-category variable in a regression analysis, we'll call it the Naïve RELCODE. The simple regression follows.

Scatterplot of Religiosity vs. Naïve RELCODE

Below is a scatterplot of the "relationship" of Religiosity to Naïve RELCODE.

[Figure: scatterplot, Religiosity by Naïve RELCODE. Annotation: This is mostly a page of crap because the analysis is completely inappropriate.]

Regression

Looks like a strong "negative" relationship. But wait!! Something's wrong. <===== Not crap.

For this analysis, I assigned the numbers 1, 2, and 3 to the religions Prot, Cath, and Jew respectively. But I could just as well have used a different assignment. How about Cath = 1, Prot = 2, and Jew = 3?
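That relabeling is a single RECODE in syntax. A sketch: it assumes the naive code is a numeric variable named RELCODE, and the new variable name RELCODE2 is mine, not the lecture's.

    * Swap the codes for Prot (1) and Cath (2); Jew keeps 3.
    RECODE RELCODE (1=2)(2=1)(ELSE=COPY) INTO RELCODE2.
    EXECUTE.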
The data would now look like this:

Religion  New Naïve RelCode  Religiosity
Prot      2                   6
Prot      2                  12
Prot      2                  13
Prot      2                  11
Prot      2                   9
Prot      2                  14
Prot      2                  12
Cath      1                   5
Cath      1                   7
Cath      1                   8
Cath      1                   9
Cath      1                  10
Cath      1                   8
Cath      1                   9
Jew       3                   4
Jew       3                   3
Jew       3                   6
Jew       3                   5
Jew       3                   7
Jew       3                   8
Jew       3                   2

[Figure: scatterplot, Religiosity by New Naïve RELCODE. Annotation: This is another page of crap because the analysis is completely inappropriate.]

The analysis would be:

Regression

Whoops! What's going on? Two analyses of the same data yield two VERY different results. Which is correct? Answer: Neither is correct. In fact, there is nothing of use in either analysis. This is a great example of how a statistical analysis can go completely wrong.

The problem

Qualitative factors with 3 or more values – religion, race, type of graduate program, etc. – cannot be analyzed using simple regression techniques in which the factor is used "as-is" as a predictor. That's because the numbers assigned to qualitative factors are simply names; any set of numbers will do. The problem is that each different set of numbers will yield a different result in a simple regression.

Note: If the qualitative factor has only 2 values – i.e., it's a dichotomy – it CAN be used as-is in the regression. (So everything on the first couple of pages of this lecture is still true.) But if it has 3 or more values, it cannot.

Does this mean that regression analysis is useful only for continuous or dichotomous variables? How limiting!!

The solution – (thanks, Mathematicians)

1. Represent each value of the qualitative factor with a combination of values of two or more specially selected Group Coding Variables. They're called group coding variables because each value of a qualitative factor represents a group of people. If there are K groups, then K-1 group coding variables are required.

2. Regress the dependent variable onto the set of group coding variables in a multiple regression.

Group Coding Variables

The question arises: What actually are the group coding variables? How are they created? There are 3 common types of group coding variables. (There are several other, less common types.)

1. Dummy Coding.
2. Effects Coding.
3. Contrast Coding. (We won't cover this technique this semester. It's covered in Advanced SPSS.)

Dummy Variable Coding

In Dummy Variable Coding, one group is designated as the Comparison/Reference group. As a byproduct of the analysis, its mean is compared with the means of all the other groups.

If K is the number of groups, then K-1 Dummy Variables are created. The comparison group is assigned the value 0 on all Dummy Variables. Each other group is assigned the value 1 on one Dummy Variable and 0 on the rest, with each group assigned the value 1 on a different Dummy Variable.
Examples:

Two Groups (even though we don't actually need special techniques for two groups)

Group  DV1
G1     1
G2     0    <- The Comparison Group

Three Groups

Group  DV1  DV2
G1     1    0
G2     0    1
G3     0    0    <- The Comparison Group

Four Groups

Group  DV1  DV2  DV3
G1     1    0    0
G2     0    1    0
G3     0    0    1
G4     0    0    0    <- The Comparison Group

Five Groups

Group  DV1  DV2  DV3  DV4
G1     1    0    0    0
G2     0    1    0    0
G3     0    0    1    0
G4     0    0    0    1
G5     0    0    0    0    <- The Comparison Group

Etc.

Because, as will be shown below, the regression results in a comparison of the means of the groups with "1" codes with the mean of the Comparison group, this coding scheme is most often used in situations in which there is a natural comparison group – for example, a control group to be compared with several experimental groups.

Example Regression Using Dummy Variable Coding

The hypothetical data are job satisfaction scores (JS) of three groups of employees.

JS  JOB  DC1  DC2
 6   1    1    0
 7   1    1    0
 8   1    1    0
11   1    1    0
 9   1    1    0
 7   1    1    0
 7   1    1    0
 5   2    0    1
 7   2    0    1
 8   2    0    1
 9   2    0    1
10   2    0    1
 8   2    0    1
 9   2    0    1
 4   3    0    0
 3   3    0    0
 6   3    0    0
 5   3    0    0
 7   3    0    0
 8   3    0    0
 2   3    0    0

(JOB = 1: Group 1. JOB = 2: Group 2. JOB = 3: Group 3, the Comparison Group.)

[Screenshot: the REGRESSION dialog.]
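You don't have to type the dummy codes by hand. Here's a sketch of syntax that builds them from JOB and runs the regression; the names DC1 and DC2 match the listing above, but the RECODE approach is mine rather than the lecture's:

    * DC1 = 1 for Group 1, else 0; DC2 = 1 for Group 2, else 0.
    * Group 3, coded 0 on both, is the Comparison group.
    RECODE JOB (1=1)(ELSE=0) INTO DC1.
    RECODE JOB (2=1)(ELSE=0) INTO DC2.
    EXECUTE.

    REGRESSION
      /DEPENDENT JS
      /METHOD=ENTER DC1 DC2.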
Regression

[SPSS output. Annotations:]

This F tests the overall null hypothesis that there are no differences among the 3 population means. It's the same value we would have obtained had we conducted an ANOVA. The F is significant, so reject the hypothesis that the population means are equal.

When the predictors are group coding variables, we often say that R-squared is the proportion of variance related to group membership.

Interpretation of the Coefficients Box

Each Dummy Variable compares the mean of the group coded 1 on that variable to the mean of the Comparison group. The value of the B coefficient is the difference in means.

So, for DC1, the B of 2.857 means that the mean of Group 1 was 2.857 larger than the Comparison group mean. For DC2, the B of 3.000 means that the mean of Group 2 was 3.000 larger than the Comparison group mean.

Each t tests the significance of the difference between a group mean and the Reference group mean. The t = 2.907, p = .009 tests the significance of the difference between the Group 1 mean and the Reference group mean. The t = 3.052, p = .007 tests the significance of the difference between the Group 2 mean and the Reference group mean. So the mean of Group 1 is significantly different from the Reference group mean, and the mean of Group 2 is also significantly different from the Reference group mean.

When is dummy coding used? When one of the groups is a natural control group for all the other groups.

Effects Coding (called Deviation coding in SPSS)

Effects coding is basically the same as Dummy Variable Coding, with the exception that the comparison group's code is switched from all 0s to all -1s.

Two Groups (remember, special coding is not actually needed when there are only two groups)

Group  Code
G1      1
G2     -1

Three Groups – special coding IS needed when you are comparing the means of 3 or more groups.

Group  GCV1  GCV2
G1      1     0
G2      0     1
G3     -1    -1

Four Groups

Group  GCV1  GCV2  GCV3
G1      1     0     0
G2      0     1     0
G3      0     0     1
G4     -1    -1    -1

Etc.

The coding switch changes the interpretation of the B coefficients. Now, rather than representing a comparison of the mean of a "1" group with the mean of a comparison group, each B coefficient represents a comparison of the mean of a "1" group with the mean of ALL the groups.

Regression Example Using Effects Coding

JS  JOB  EC1  EC2
 6   1    1    0
 7   1    1    0
 8   1    1    0
11   1    1    0
 9   1    1    0
 7   1    1    0
 7   1    1    0
 5   2    0    1
 7   2    0    1
 8   2    0    1
 9   2    0    1
10   2    0    1
 8   2    0    1
 9   2    0    1
 4   3   -1   -1
 3   3   -1   -1
 6   3   -1   -1
 5   3   -1   -1
 7   3   -1   -1
 8   3   -1   -1
 2   3   -1   -1

(JOB = 1: Group 1. JOB = 2: Group 2. JOB = 3: Group 3, the Comparison Group.)

Alas, we can use REGRESSION to compare means, but it won't report the means for us. We have to use some other procedure, such as the REPORT procedure, if we want to actually see the values of the means.
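A sketch of the corresponding syntax. The only change from the dummy-coding version is that the comparison group (JOB = 3) gets -1 instead of 0; the MEANS command at the end is one way (my choice, not the lecture's) to get the group means that REGRESSION won't report:

    * Effect codes: the comparison group (JOB = 3) is -1 on both variables.
    RECODE JOB (1=1)(3=-1)(ELSE=0) INTO EC1.
    RECODE JOB (2=1)(3=-1)(ELSE=0) INTO EC2.
    EXECUTE.

    REGRESSION
      /DEPENDENT JS
      /METHOD=ENTER EC1 EC2.

    * Group means, which REGRESSION won't print.
    MEANS TABLES=JS BY JOB
      /CELLS=MEAN STDDEV COUNT.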
Regression

[SPSS output. Annotations:]

Everything in the top three boxes is the same as in the dummy-variable analysis. The means are still significantly different: the F of 5.930 is EXACTLY the same value as we obtained using dummy coding, and EXACTLY the same value we'd have obtained had we done an analysis of variance.

Interpretation of the Coefficients Box

In effects coding, each B coefficient represents a comparison of the mean of the group coded 1 on the variable with the mean of ALL the groups.

So, for EC1, the B of .905 indicates that the mean of Group 1 was .905 larger than the mean of all the groups. For EC2, the B of 1.048 indicates that the mean of Group 2 was 1.048 larger than the mean of all the groups. There is no B coefficient for Group 3.

The t of 1.594 indicates that the mean of Group 1 was not significantly different from the mean of all groups. The t of 1.846 indicates that the mean of Group 2 was not significantly different from the mean of all groups.

Remember that these are the same data as above. This shows that one form of analysis of the data may be more informative than another form. In this case, the dummy-variable analysis was more informative.

Perspective

You may recall that we considered a procedure for comparing means in the fall semester: the analysis of variance. Invoking ANOVA was a lot easier than creating group coding variables and performing the regression analyses we've done here. Furthermore, the analysis of variance procedure in SPSS automatically provides means and standard deviations of the groups, something we had to do as an extra step when using REGRESSION. Plus, the analysis of variance provides post hoc tests that aren't available in regression. Here's the output of SPSS's ONEWAY analysis of variance procedure for the above data.

[SPSS ONEWAY output.]

Note that the F value (5.930) is exactly the same as the F value from the ANOVA table produced by the regression procedure.
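For reference, ONEWAY syntax along these lines produces that kind of output. A sketch; the specific options shown are my guesses at what was requested:

    ONEWAY JS BY JOB
      /STATISTICS DESCRIPTIVES
      /POSTHOC=TUKEY ALPHA(0.05).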
So why bother to use the regression procedure to compare group means? The answer is that if the comparison of a single set of group means were all there was to the analysis, you would NOT use the regression procedure – you'd use the analysis of variance procedure. But here are four reasons for using, or at least being familiar with, regression-based means comparisons and the group coding schemes upon which they're based.

1. Whenever you have a mixture of qualitative and quantitative variables in the analysis, regression procedures are the overwhelming choice. Example: Are there differences in the means of three groups controlling for cognitive ability? You can't answer that without including cognitive ability, a quantitative variable, in the analysis. Traditional analysis of variance formulas don't easily incorporate quantitative variables. Once you're familiar with group coding schemes, it's pretty easy to perform analyses with both quantitative and qualitative variables. (A sketch of such an analysis appears at the end of this handout.)

2. To increase efficiency, most statistical packages perform ALL analyses – of qualitative variables, quantitative variables, and mixtures – using regression formulas. When analyzing only qualitative variables, they will print output that looks as if they've used the analysis of variance formulas, but behind your back they've actually done regression analyses. Some of that output may reference the behind-your-back regression that was actually performed. So knowing about the regression approach to comparing group means will help you understand the output of statistical packages performing "analysis of variance". We'll see that in the GLM procedure below.

3. Other analyses – Logistic Regression and Survival Analysis, to name two in SPSS – have very regression-like output when qualitative factors are analyzed. That is, they're quite up-front about the fact that they do regression analyses. If you don't understand the regression approach to analysis of variance, it'll be very hard for you to understand the output of these procedures.

4. It's just cool to know how to do this.

Doing the analyses using the GLM procedure

JS  JOB
 6   1
 7   1
 8   1
11   1
 9   1
 7   1
 7   1
 5   2
 7   2
 8   2
 9   2
10   2
 8   2
 9   2
 4   3
 3   3
 6   3
 5   3
 7   3
 8   3
 2   3

(JOB = 1: Group 1. JOB = 2: Group 2. JOB = 3: Group 3, the Comparison Group.)

Note that there are no group coding variables in the data that must be submitted to GLM. Hurray. Hurray!! Don't need no stinkin' GCVs.

In the GLM dialog, put the names of qualitative factors in the Fixed Factor(s) field and the names of quantitative variables in the Covariates field.

SAVE OUTFILE='C:\Users\Michael\Documents\JSExampleFor513.sav'
  /COMPRESSED.

UNIANOVA JS BY JOB
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /POSTHOC=JOB(BTUKEY)
  /PLOT=PROFILE(JOB)
  /PRINT=ETASQ HOMOGENEITY DESCRIPTIVE OPOWER
  /CRITERIA=ALPHA(.05)
  /DESIGN=JOB.

[DataSet0] C:\Users\Michael\Documents\JSExampleFor513.sav

Between-Subjects Factors
        N
JOB  1  7
     2  7
     3  7

Descriptive Statistics (Dependent Variable: JS)
Job    Mean  Std. Deviation   N
1      7.86  1.676            7
2      8.00  1.633            7
3      5.00  2.160            7
Total  6.95  2.247           21

Levene's Test of Equality of Error Variances(a) (Dependent Variable: JS)
F     df1  df2  Sig.
.572    2   18  .574
Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + JOB

Tests of Between-Subjects Effects (Dependent Variable: JS)
Source           Type III SS  df  Mean Square        F  Sig.  Partial Eta Sq  Noncent. Param.  Obs. Power(b)
Corrected Model    40.095(a)   2       20.048    5.930  .011            .397           11.859           .815
Intercept        1015.048      1     1015.048  300.225  .000            .943          300.225          1.000
JOB                40.095      2       20.048    5.930  .011            .397           11.859           .815
Error              60.857     18        3.381
Total            1116.000     21
Corrected Total   100.952     20
a. R Squared = .397 (Adjusted R Squared = .330)
b. Computed using alpha = .05

What's this? It's regression stuff.

Corrected Model: This is what is in the ANOVA box in REGRESSION. GLM regresses the dependent variable onto ALL of the group coding variables and quantitative variables, if there are any. This line reports the significance of that overall regression.

Intercept: This is the report on the Y-intercept of the "all predictors" regression reported on in the line immediately above. These lines are signs of the behind-your-back regression analysis that's actually been conducted. Prediction: At some time in your life, you'll mistakenly think that the p-value for the Intercept is the p-value for your research hypothesis and incorrectly interpret your results. (I've done that. Curse you, SPSS.)

JOB: The overall F again, this time for JOB – what we're interested in.
Note that no mention is made of the fact that two group coding variables were created to represent JOB. The only indication that something is up is the 2 in the df column. That 2 is the number of group coding variables actually used to represent the JOB factor.

Error: The denominator of the F statistic.

Partial Eta Squared: A measure of effect size appropriate for analysis of variance. See the 5100 notes for the interpretation of eta squared.

Observed Power: The probability of a significant F if the experiment were conducted again with population means equal to these sample means.

Profile Plots

[Figure: profile plot of the JS means for the three JOB groups.]

Post Hoc Tests: JOB

Homogeneous Subsets

JS (Tukey B)
              Subset
JOB  N      1     2
3    7   5.00
1    7          7.86
2    7          8.00
Means for groups in homogeneous subsets are displayed. Based on observed means. The error term is Mean Square(Error) = 3.381.

Remember the interpretation of Homogeneous Subsets output: means not in the same column are significantly different from each other.

Having your cake and eating it too – Specifying Coding Schemes in GLM

What if you just miss group coding variables? Is there a way to see them one last time in GLM? Yes – click the Contrasts button in the GLM dialog to work with group coding variables. Here are the SPSS names for the coding schemes we're using:

Our name   SPSS's name
Dummy      Simple
Effects    Deviation

(I should have checked the homogeneity box here. Thanks, Stephanie.)

Checking the Parameter Estimates box tells GLM to print out any regression parameters it might have computed. These are the regression parameters for any quantitative independent variables and for the group coding variables that GLM creates automatically.

UNIANOVA JS BY Job
  /CONTRAST(Job)=Deviation
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /PRINT=OPOWER ETASQ DESCRIPTIVE PARAMETER
  /CRITERIA=ALPHA(.05)
  /DESIGN=Job.
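An aside before the output: per the name mapping above, SPSS's Simple contrast is the analog of dummy coding. A sketch of the request – as I recall the GLM syntax, the reference category is named in parentheses, but treat that detail as an assumption to verify against the documentation:

    UNIANOVA JS BY Job
      /CONTRAST(Job)=Simple(3)
      /PRINT=PARAMETER
      /DESIGN=Job.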
The output of the Deviation run follows.

Univariate Analysis of Variance

Between-Subjects Factors
        N
Job  1  7
     2  7
     3  7

Descriptive Statistics (Dependent Variable: JS)
Job    Mean  Std. Deviation   N
1      7.86  1.676            7
2      8.00  1.633            7
3      5.00  2.160            7
Total  6.95  2.247           21

Tests of Between-Subjects Effects (Dependent Variable: JS)
Source           Type III SS  df  Mean Square        F  Sig.  Partial Eta Sq  Noncent. Param.  Obs. Power(b)
Corrected Model    40.095(a)   2       20.048    5.930  .011            .397           11.859           .815
Intercept        1015.048      1     1015.048  300.225  .000            .943          300.225          1.000
Job                40.095      2       20.048    5.930  .011            .397           11.859           .815
Error              60.857     18        3.381
Total            1116.000     21
Corrected Total   100.952     20
a. R Squared = .397 (Adjusted R Squared = .330)
b. Computed using alpha = .05

Parameter Estimates (Dependent Variable: JS)
Parameter  B      Std. Error      t  Sig.  95% CI Lower  95% CI Upper  Partial Eta Sq  Noncent. Param.  Obs. Power(a)
Intercept  5.000    .695      7.194  .000         3.540         6.460            .742            7.194          1.000
[Job=1]    2.857    .983      2.907  .009          .792         4.922            .319            2.907           .785
[Job=2]    3.000    .983      3.052  .007          .935         5.065            .341            3.052           .823
[Job=3]    0(b)     .             .     .             .             .               .                .              .
a. Computed using alpha = .05
b. This parameter is set to zero because it is redundant.

The parameter estimates above are from the default dummy coding that SPSS always does automatically. Note that they are identical to those obtained when we used REGRESSION with dummy coding.

Custom Hypothesis Tests

These are the results for the "Deviation" group coding scheme we asked for. The p-values are the same as those obtained earlier using the REGRESSION procedure with effects coding.

Contrast Results (K Matrix) (Job Deviation Contrast(a); Dependent Variable: JS)

Level 1 vs. Mean
  Contrast Estimate                       .905
  Hypothesized Value                      0
  Difference (Estimate - Hypothesized)    .905
  Std. Error                              .567
  Sig.                                    .128
  95% CI for Difference                   -.287 to 2.097

Level 2 vs. Mean
  Contrast Estimate                       1.048
  Hypothesized Value                      0
  Difference (Estimate - Hypothesized)    1.048
  Std. Error                              .567
  Sig.                                    .081
  95% CI for Difference                   -.145 to 2.240

a. Omitted category = 3

What's this??? The Test Results box reports the simultaneous test of the two deviation contrasts – note that its F (5.930) is the same as the overall F for Job.

Test Results (Dependent Variable: JS)
Source    Sum of Squares  df  Mean Square      F  Sig.  Partial Eta Sq  Noncent. Param.  Obs. Power(a)
Contrast  40.095           2       20.048  5.930  .011            .397           11.859           .815
Error     60.857          18        3.381
a. Computed using alpha = .05
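Finally, to make reason 1 from the Perspective section concrete, here is a hedged sketch of a GLM run with a quantitative covariate added to the three-group comparison. The covariate name COGABIL is hypothetical – there is no such variable in the lecture's data set:

    * Compare the three JOB means, controlling for a quantitative covariate.
    * COGABIL is a hypothetical variable name.
    UNIANOVA JS BY JOB WITH COGABIL
      /METHOD=SSTYPE(3)
      /PRINT=DESCRIPTIVE PARAMETER
      /DESIGN=COGABIL JOB.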