Mason.gmu.edu



Project 3 - CFA - MTMM

Due date: Who knows?

This project will involve the proj1cor.dat and proj1sd.dat files. There are 15 variables in these files, 300 cases, and no missing data. These data were contrived by me and are, in fact, the data that I used to demonstrate effect size computations in my green Sage book. The variables are 'IC1' for 'Individual Characteristic 1', 'Jobperf', 'goaldif' for 'goal difficulty', 'gender', 'yesno', 'goalcom' for 'goal commitment', and 'var11'-'var33'. I purposely gave some of these variables nebulous titles so that we can use this data set for different projects with different "variables".

Project 3 will involve var11-var33. These nine observed variables are meant to represent measures of each of three traits from each of three exercises (e.g., assessment center data). The numbers in these variable labels indicate the latent variables of which the observed variable is an example. The exercises are group discussion, competitive task, and role play. The traits are problem solving, interpersonal skill, and initiative. This setup is similar to that in Schneider & Schmitt, JAP, 1992, p. 32. Var11, 12, and 13 are group discussion exercises for problem solving, interpersonal skill, and initiative, respectively; var21, 22, and 23 are competitive task exercises for the same traits; etc. The LISREL analysis will involve testing a multitrait-multimethod sort of model with six latent variables corresponding to the three methods and the three traits.

First, conduct a confirmatory factor analysis in which each observed variable is caused only by its respective trait (Model 1). In other words, no method effects.

Path Model: Model 1

Now, conduct a confirmatory factor analysis in which each observed variable is caused by its respective method and trait. Also, although the traits can correlate with one another and the methods can correlate with one another, the traits should not be allowed to correlate with the methods.
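The digit coding of the nine observed variables described above (first digit = exercise/method, second digit = trait) can be made explicit with a small Python sketch; the label strings are just the trait and exercise names from the text:

```python
# Decode the var{method}{trait} naming scheme: first digit = exercise (method),
# second digit = trait, as described in the project instructions.
methods = {1: "group discussion", 2: "competitive task", 3: "role play"}
traits = {1: "problem solving", 2: "interpersonal skill", 3: "initiative"}

design = {
    f"var{m}{t}": (methods[m], traits[t])
    for m in methods
    for t in traits
}

print(design["var12"])  # ('group discussion', 'interpersonal skill')
print(design["var23"])  # ('competitive task', 'initiative')
```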
Be sure to ask for completely standardized solutions.

Path Model: Model 2

Does this second model (Model 2) improve upon the first? Perhaps more importantly, do all of the results make sense (the answer is no)?

                                Model 1    Model 2
Min. Fit Function Chi-Square    108.34     24.67     Better
RMSEA                           .11        .061      Better
Normed Fit Index                .84        .96       Better
Comparative Fit Index           .87        .98       Better
Goodness of Fit Index           .93        .98       Better

Overall, the fit indices tell us that Model 2 fits the data better than Model 1. This is expected because we are adding a large number of paths that were omitted in Model 1. However, the results of Model 2 do not make sense, because one of the theta-delta values is negative: VAR12 = -.76. Error variance cannot be negative, because it is impossible to explain more than 100% of the variance in a variable. Additionally, one of the lambda values is negative: the loading of VAR31 on Role Play = -.22.

In order to deal with the problem that you have just identified, try constraining the rogue value to be equal to the value of a conceptually similar path. Didn't work? Try constraining it to equal the error variance for VAR31 (Model 3).

Path Model: Model 3

Did this solve your problem? No: in fact, the theta-delta value for VAR31 is also negative now (= -.76). Are we having fun yet? (At least the models are running!) Now try freeing up some of the trait-method correlations. Keep fiddling until you have no negative thetas and no negative lambdas. Once you get tired of this, go back to the original Model 2 and free up M1-T3, M2-T3, and M3-T2 (Model 4). Whew.

Path Model: Model 4

Now go back to the modification indices for Model 2. What were the signs that this last fix might have worked (apart from phi)?

Model 2: Modification Indices

The biggest problem with Model 2 is the negative error variance values (theta-delta). This suggests that the model is mis-specified.
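Why a negative theta-delta means "more than 100% of the variance explained" can be seen directly from the completely standardized solution: when an indicator's trait factor and method factor are uncorrelated (as specified in Model 2), its standardized error variance is 1 minus the sum of its squared loadings, so overly large loadings drive it negative. A minimal sketch with hypothetical loadings (the .7/.4 and 1.15 values below are illustrations, not the actual Model 2 estimates):

```python
def theta_delta(trait_loading: float, method_loading: float) -> float:
    """Standardized error variance of an indicator whose trait and method
    factors are uncorrelated: 1 - (lambda_trait**2 + lambda_method**2)."""
    return 1.0 - (trait_loading**2 + method_loading**2)

# Admissible case: loadings of .7 and .4 leave positive error variance.
print(round(theta_delta(0.7, 0.4), 2))   # 0.35
# Heywood case: a standardized loading of 1.15 already "explains"
# more than 100% of the indicator's variance.
print(round(theta_delta(1.15, 0.4), 2))  # -0.48
```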
In terms of the modification indices, the errors of all of the observed variables in the group discussion method correlate with other observed variables, and some observed-variable error terms indicate a relationship between method and trait. This suggests that something we are not taking into account is impacting those observed variables. Therefore, if we free up some theoretically valid correlations between methods and traits and fish around, we may be able to find a solution, which we did. Back to Model 4.

Are the path coefficients as you would have hoped? What about the standard fit statistics?

Overall, the path coefficients that emerged were pretty strong and are as I would have hoped. On most of the latent variables, the coefficients from the different observed variables were approximately .70 or above. However, the paths on two latent variables could have been stronger (Competitive Task and Interpersonal Skill). For example, I would have hoped for the coefficients from the Competitive Task latent variable to be stronger; the coefficients from its observed variables were .44, 1.15, and .49. In terms of the standard fit statistics, they turned out very well. The chi-square (4.86) is smaller than that of the second model, and the RMSEA (< .01), NFI (.99), CFI (1.00), and GFI (1.00) indicate that the hypothesized model fits the data very well. The standard fit statistics are nearly perfect and as good as one would hope for.

Path Coefficients/Factor Loadings:

Test a modified version of Model 2 in which the correlations among the three traits are fixed to 1 (Model 5). How is this model related to Model 2? Conduct a test of the difference in fit between these two models.
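As a rough check on fit indices like the RMSEA values quoted in this project, the usual point estimate can be computed from the chi-square, its degrees of freedom, and the sample size (N = 300 here). This is a sketch of the standard formula, not LISREL's exact computation, and the df of 6 used in the example is hypothetical, for illustration only:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Whenever the chi-square is no larger than its df, the estimate is 0,
# which is how a model like Model 4 (chi-square = 4.86) can print RMSEA < .01.
print(rmsea(4.86, 6, 300))  # 0.0 (df of 6 assumed for illustration)
```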
What does this test suggest?

Inserted syntax:
set the correlation between Group and Compete to 1
set the correlation between Group and Role to 1
set the correlation between Compete and Role to 1

Model 5: Path Model

How is this model related to Model 2? The purpose of Model 2 was to test how well the data fit the hypothesized model when each observed variable was caused by both its trait and its method. For theoretical reasons, neither model allows the traits and methods to correlate. However, Model 5 adds constraints to Model 2 by fixing the correlations among the three traits to 1. Therefore, we are testing nested models. Model 5 tests how well the data fit the model when the three traits are hypothesized to represent a single construct.

Conduct a test of the difference in fit between these two models.

              Model 2    Model 5    Difference
Chi-Square    25.45      44.20      18.75**
df            10         15         5

What does this test suggest? The chi-square test of the difference in fit between Model 2 and Model 5 suggests that these models are significantly different. In other words, setting the correlations among the traits equal to 1 in Model 5 makes its fit statistically significantly worse than that of Model 2. The chi-square for Model 5 increases by 18.75 points, which suggests that the correspondence between the observed and reproduced correlations is stronger in Model 2 than in Model 5. Ultimately, this shows that the traits (problem solving, interpersonal skill, and initiative) do not represent a single construct.
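The nested-model comparison above can be checked numerically. This sketch, assuming scipy is available, computes the chi-square difference and its p-value from the values in the table:

```python
from scipy.stats import chi2 as chi2_dist

# Chi-square difference test for nested models (values from the table above).
chi2_m2, df_m2 = 25.45, 10   # Model 2 (less constrained)
chi2_m5, df_m5 = 44.20, 15   # Model 5 (trait correlations fixed to 1)

diff = chi2_m5 - chi2_m2          # 18.75
df_diff = df_m5 - df_m2           # 5
p = chi2_dist.sf(diff, df_diff)   # survival function = 1 - CDF

print(f"chi-square difference = {diff:.2f}, df = {df_diff}, p = {p:.4f}")
# p < .01: the constrained Model 5 fits significantly worse than Model 2.
```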