Correlations: Two independent Pearson r's

This procedure refers to tests of hypotheses concerning differences between two independent population correlation coefficients. The null hypothesis states that both correlation coefficients are identical, that is, ρ1 = ρ2; the (two-sided) alternative hypothesis is that they differ, ρ1 ≠ ρ2:

H0: ρ1 − ρ2 = 0
H1: ρ1 − ρ2 ≠ 0.

If the direction of the deviation ρ1 − ρ2 cannot be predicted a priori, a two-sided ('two-tailed') test should be used; otherwise a one-sided test is appropriate.
Effect size index

The effect size index q is defined as the difference between two Fisher-z-transformed correlation coefficients: q = z1 − z2, with z1 = ln((1 + r1)/(1 − r1))/2 and z2 = ln((1 + r2)/(1 − r2))/2. G*Power requires q to lie within the interval [−10, 10]. Cohen (1969, p. 109ff) defines the following effect size conventions for q:

small   q = 0.1
medium  q = 0.3
large   q = 0.5

Pressing the Determine button on the left side of the effect size label opens the effect size dialog, which can be used to calculate q from two correlation coefficients.
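As a quick illustration (not part of G*Power itself), the definition above can be evaluated directly in Python; math.atanh is exactly the Fisher z-transformation ln((1 + r)/(1 − r))/2:

```python
import math

def cohens_q(r1, r2):
    """Effect size q = z1 - z2: the difference between the
    Fisher-z-transformed correlations (math.atanh(r) equals
    ln((1 + r)/(1 - r))/2)."""
    return math.atanh(r1) - math.atanh(r2)

# Correlations used in the example below: r1 = 0.75, r2 = 0.88
q = cohens_q(0.75, 0.88)
print(round(q, 7))  # -0.4028126
```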
Given q and ρ2, we find that ρ1 = (a −1)/(a + 1), with a = exp[2q + ln((1 + ρ2 )/(1 − ρ2))].
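This inversion can be sketched as follows (the helper name rho1_from is chosen here for illustration); a round trip with the correlations of the example below recovers ρ1:

```python
import math

def rho1_from(q, rho2):
    """Recover rho1 from the effect size q and rho2:
    rho1 = (a - 1)/(a + 1), with a = exp(2q + ln((1 + rho2)/(1 - rho2)))."""
    a = math.exp(2 * q + math.log((1 + rho2) / (1 - rho2)))
    return (a - 1) / (a + 1)

# Round trip: q computed from rho1 = 0.75 and rho2 = 0.88 gives back rho1
q = math.atanh(0.75) - math.atanh(0.88)
print(round(rho1_from(q, 0.88), 6))  # 0.75
```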
G*Power computes critical values zc for the standard normal test statistic. To transform these into critical values qc on the scale of the effect size measure q, use the formula qc = zc √((N1 + N2 − 6)/((N1 − 3)(N2 − 3))) (see Cohen, 1969, p. 135).
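A minimal sketch of this transformation (the function name is illustrative, not a G*Power identifier):

```python
import math

def critical_q(zc, n1, n2):
    """Critical value on the q scale:
    qc = zc * sqrt((N1 + N2 - 6) / ((N1 - 3) * (N2 - 3)))."""
    return zc * math.sqrt((n1 + n2 - 6) / ((n1 - 3) * (n2 - 3)))

# Two-sided critical z for alpha = .05 with the sample sizes of the
# example below (N1 = 51, N2 = 260):
print(round(critical_q(1.9599640, 51, 260), 4))  # 0.3082
```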
Options

This test has no options.
Examples

Assume that we know the correlation between test A and a criterion to be ρ1 = 0.75. We want to test whether an alternative test B shows a higher correlation, say at least ρ2 = 0.88. We have two data sets, one using test A with N1 = 51, and a second data set using test B with N2 = 260. Given α = 0.05, what is the power of a two-sided test of a difference between these correlations?
We use the effect size drawer to calculate the effect size q from the two population correlations. Setting Correlation coefficient ρ1 = 0.75 and Correlation coefficient ρ2 = 0.88 yields q = −0.4028126 which is transferred to the main window using the Calculate and transfer to main window button. Now we may perform our power analysis.
Select
    Type of power analysis: Post hoc
Input
    Effect size q: -0.4028126
    α err prob: 0.05
    Sample size: 260
    Sample size: 51
Output
    Critical z: -1.9599640
    Power (1-β): 0.726352
The output shows that the power of the test of whether two independent correlations are different given the above assumptions is about 0.726. This is very close to the power value of 0.72 given in Example 4.3 in Cohen (1988, p. 131) based on input values that are identical to the ones used here. The small deviations are due to rounding errors in Cohen's analysis.
If we instead assumed N1 = N2, how many subjects would we then need to achieve the same power? To answer this question we use an a priori power analysis with Power (1- β err prob) = 0.726352 (as calculated above) as input and an Allocation ratio N2/N1 = 1 to enforce equal group sizes.
Let us keep all other parameters identical to those just used. The result is that we now need only 84 cases in each group. Thus choosing equal sample sizes reduces the overall sample size considerably, from 260 + 51 = 311 to 84 + 84 = 168.
Implementation notes

The H0 distribution is the standard normal distribution N(0, 1). The H1 distribution is the normal distribution N(q/s, 1), where q denotes the effect size as defined above, s = √(1/(N1 − 3) + 1/(N2 − 3)), and N1 and N2 represent the sample sizes in each of the two groups.
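Under these definitions, the post hoc power and the equal-n a priori search from the examples above can be sketched as follows (a minimal illustration, not G*Power's actual code; Python's statistics.NormalDist supplies the normal CDF and quantile function):

```python
from statistics import NormalDist

def power_two_corr(q, n1, n2, alpha=0.05):
    """Two-sided post hoc power: the test statistic is N(0, 1) under H0
    and N(q/s, 1) under H1, with s = sqrt(1/(N1 - 3) + 1/(N2 - 3))."""
    nd = NormalDist()
    s = (1 / (n1 - 3) + 1 / (n2 - 3)) ** 0.5
    zcrit = nd.inv_cdf(1 - alpha / 2)
    # H1 probability mass beyond either critical value
    return nd.cdf(abs(q) / s - zcrit) + nd.cdf(-abs(q) / s - zcrit)

q = -0.4028126
print(round(power_two_corr(q, 260, 51), 4))  # 0.7264 (post hoc example)

# A priori with equal group sizes: smallest n per group reaching that power
n = 4
while power_two_corr(q, n, n) < 0.726352:
    n += 1
print(n)  # 84, i.e. a total sample size of 2 * 84 = 168
```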
Validation

The results were checked against the table in Cohen (1969, chap. 4).
References

Cohen, J. (1969). Statistical power analysis for the behavioral sciences (1st ed.). New York: Academic Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.