PDS, Corr/Regr



Correlation/Regression Analysis Using Personal Data Set

Using your Personal Data Set (the one with two dichotomous variables and two continuous variables), do the following:

- Compute the Pearson r between your two dichotomous variables. This r will be a phi coefficient. Square the phi coefficient and then multiply it by the total number of subjects on which the phi was computed. This will give you the chi-square statistic that tests the null hypothesis that the value of phi in the population is zero.* Compare your computed value of chi-square with the one you obtained when you computed chi-square in the usual fashion for a contingency table analysis (as we did when we were covering the chapter on chi-square).

- Compute the Pearson r between one of your dichotomous variables and one of your continuous variables. This r is a point-biserial correlation coefficient. Using the usual formula for the t that tests the null hypothesis that rho is zero, test the null hypothesis that the point-biserial correlation coefficient has a value of zero in the population. Compare your computed value of t with the one you obtained when you tested the relationship between the dichotomous variable and the continuous variable in the traditional way, with a pooled-variances independent-samples t test, as we did when we were covering Chapter 7.

- Conduct a correlation/regression analysis predicting one of your continuous variables from the other continuous variable. Start by evaluating the assumptions of the tests of significance (normality and homoscedasticity). Obtain the correlation coefficient, the regression line, the test of significance, and a 95% confidence interval for rho. Obtain a scatter plot with the regression line superimposed on it. Be prepared to write an APA-style report of the results should I call on you to do so.

In the CorrRegrPDS forum in Canvas, write a terse report on the results of your three analyses. For example:

- Female college students were significantly more likely to report having back problems than were male college students, phi = .276, chi-square(1, N = 100) = 7.605, p = .006.
- Men were significantly taller than were women, rpb = .806, t(153) = 16.836, p < .001.
- Faculty salaries were significantly correlated with the number of years since they earned their Ph.D., r(N = 397) = .419, p < .001, 95% CI [.28, .54].

Later you shall use these data to conduct an analysis of covariance, a factorial analysis of variance, and possibly some nonparametric or resampling statistics.

*Expect some rounding error in the value of the chi-square you get from the phi coefficient. Here is an example. The variables were studio (Disney or Warner Brothers) and whether or not the studio had international earnings above the median. Crosstabs returned a chi-square value of 3.360:

Chi-Square Tests
                        Value     df    Asymptotic Significance (2-sided)
  Pearson Chi-Square    3.360a     1    .067

Correlations
                                     Studio (Disney=1 WB=2)   InternationalMedian   International Gross
  Studio (Disney=1 WB=2)
    Pearson Correlation                        1                    -.237                 -.395**
    Sig. (2-tailed)                                                  .069                  .002
    N                                         60                      60                    60

The phi coefficient is .237. Computed from phi, chi-square = N × phi² = 60(.237)² = 3.370. WTF, Crosstabs reported a chi-square of 3.360. The difference is due to rounding error: the phi coefficient is .237 only to three decimal places. Let's get more precision. If you double-click the coefficient in the table in SPSS, you see it with greater precision, .236643. Now, chi-square = 60(.236643)² = 3.35999, which rounds to 3.360.
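The assignment itself is done with SPSS, but if you would like to check the two equivalences outside of SPSS, here is a minimal Python sketch using simulated data (the variable names and numbers below are made up, not from anyone's Personal Data Set). It shows that N × phi² reproduces the uncorrected Pearson chi-square, and that t = r × sqrt((N - 2)/(1 - r²)) applied to the point-biserial r reproduces the pooled-variances t.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 100
sex = rng.integers(0, 2, size=n)                              # dichotomous: 0 = male, 1 = female (hypothetical)
back = (rng.random(n) < 0.25 + 0.25 * sex).astype(int)        # dichotomous outcome (hypothetical)
height = 69 - 5 * sex + rng.normal(0, 2.5, size=n)            # continuous outcome (hypothetical)

# Phi and the chi-square it implies: chi-square = N * phi**2
phi, _ = stats.pearsonr(sex, back)                            # Pearson r on two dichotomies is phi
chi_from_phi = n * phi ** 2
chi2, p_chi, dof, _ = stats.chi2_contingency(pd.crosstab(sex, back), correction=False)
print(f"N*phi^2 = {chi_from_phi:.3f}   Pearson chi-square = {chi2:.3f}, p = {p_chi:.3f}")

# Point-biserial r and the pooled-variances t it implies: t = r*sqrt((N-2)/(1-r**2))
r_pb, _ = stats.pointbiserialr(sex, height)                   # identical to pearsonr(sex, height)
t_from_r = r_pb * np.sqrt((n - 2) / (1 - r_pb ** 2))
t_stat, p_t = stats.ttest_ind(height[sex == 1], height[sex == 0], equal_var=True)
print(f"t from r_pb = {t_from_r:.3f}   pooled t = {t_stat:.3f}, p = {p_t:.4f}")
# The sign of each t depends only on which group is coded 1 (or listed first);
# the magnitudes match, just as N*phi**2 matches the uncorrected contingency-table chi-square.

Note that correction=False is needed so that SciPy reports the uncorrected Pearson chi-square; with the default Yates correction the 2 x 2 result would not equal N × phi².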
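Here, similarly, is a hedged Python sketch of the third analysis with made-up data: the regression line, the test of r, a Fisher r-to-z 95% confidence interval for rho, and a scatter plot with the regression line superimposed. It only illustrates the computations; it is not a substitute for the SPSS output you are asked to produce.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(7)
n = 397
years = rng.uniform(0, 35, size=n)                           # e.g., years since the Ph.D. (hypothetical)
salary = 50000 + 1200 * years + rng.normal(0, 15000, n)      # e.g., faculty salary (hypothetical)

res = stats.linregress(years, salary)                        # slope, intercept, rvalue, pvalue, stderr
r = res.rvalue

# Fisher r-to-z 95% confidence interval for rho
z, se = np.arctanh(r), 1 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r(N = {n}) = {r:.3f}, p = {res.pvalue:.3g}, 95% CI [{lo:.2f}, {hi:.2f}]")
print(f"predicted salary = {res.intercept:.0f} + {res.slope:.0f}(years)")

# Check the assumptions by eye as well: a residuals-vs-predicted plot for homoscedasticity,
# a histogram or Q-Q plot of the residuals for normality.
plt.scatter(years, salary, s=10)                             # scatter plot with the regression line superimposed
xs = np.array([years.min(), years.max()])
plt.plot(xs, res.intercept + res.slope * xs)
plt.xlabel("Years since Ph.D.")
plt.ylabel("Salary")
plt.show()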
Now, on to a more important issue. Psychologists have developed the bad habit of categorizing perfectly good continuous variables prior to analysis, apparently unaware that this can distort the results and lower power. "International Gross" here is a continuous variable that was dichotomized, by a median split, into "InternationalMedian." The phi coefficient fell short of significance, leading to the conclusion that the studios did not differ in gross earnings. So, what happens if we do not degrade the international gross earnings variable? As shown above, the point-biserial coefficient is significant, rpb = .395, p < .01. More conventionally, but equivalently, an independent-samples t test also yields a significant difference:

Independent Samples Test
                                                  t       df    Sig. (2-tailed)
  International Gross   Equal variances assumed   3.274   58    .002

Independent Samples Effect Sizes
                                      Standardizer a    Point Estimate    95% Confidence Interval
                                                                          Lower       Upper
  International Gross   Cohen's d     347604207.189     .857              .318        1.390

Dichotomization has turned a large (d = .86), significant difference into a nonsignificant one.
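To see this point in miniature, here is one more Python sketch with simulated data (not the actual Disney/Warner Brothers data set): the same group difference tested once on the intact continuous variable and once after a median split.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
n_per = 30
studio = np.repeat([0, 1], n_per)                            # two studios, 30 films each (simulated)
gross = np.where(studio == 1,
                 rng.normal(600, 350, 2 * n_per),
                 rng.normal(300, 350, 2 * n_per))            # international gross, in millions (simulated)

# Keep the variable continuous: pooled-variances t test and Cohen's d
g0, g1 = gross[studio == 0], gross[studio == 1]
t, p_t = stats.ttest_ind(g1, g0, equal_var=True)
sp = np.sqrt(((n_per - 1) * g0.var(ddof=1) + (n_per - 1) * g1.var(ddof=1)) / (2 * n_per - 2))
d = (g1.mean() - g0.mean()) / sp
print(f"Continuous:   t({2 * n_per - 2}) = {t:.2f}, p = {p_t:.3f}, d = {d:.2f}")

# Degrade the variable with a median split, then test the resulting 2 x 2 table
above = (gross > np.median(gross)).astype(int)
chi2, p_chi, _, _ = stats.chi2_contingency(pd.crosstab(studio, above), correction=False)
print(f"Median split: chi-square(1, N = {2 * n_per}) = {chi2:.2f}, p = {p_chi:.3f}")
# With the same data, the test on the intact continuous variable is typically the more powerful one.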
Please read this document. If you have been keeping up with your readings, you have probably already had a look at Howell Chapter 9, Bivariate Correlation and Regression, and MacCallum et al. -- Dichot NOT!, both in Canvas/Modules.

This page most recently revised on 22-October-2020. Copyright 2020, Karl L. Wuensch - All rights reserved.
