# Linear multiple regression: Fixed model, R2 deviation from zero

In multiple regression analyses the relation of a dependent variable Y to m independent factors X1, ..., Xm is studied. The present procedure refers to the so-called conditional or fixed factors model of multiple regression (Gatsonis & Sampson, 1989; Sampson, 1974); that is, it is assumed that
Y = X β + ε
where X = (1 X1 X2 · · · Xm) is an N × (m + 1) matrix containing a constant term and fixed and known predictor variables Xi. The elements of the column vector β of length m + 1 are the regression weights, and the column vector ε of length N contains error terms, with εi ~ N(0, σ).

The procedure allows power analyses for the test that the proportion of variance of a dependent variable Y explained by a set of predictors B, that is, R2Y·B, is zero. The null and alternative hypotheses are:
H0 : R2Y·B = 0
H1 : R2Y·B > 0.
As will be shown in the examples section, the MRC procedure is quite flexible and can be used as a substitute for some other tests.

## Effect size index

The general definition of the effect size index f2 used in this procedure is: f2 = Vs/Ve, where Vs is the proportion of variance explained by a set of predictors, and Ve is the residual or error variance (Vs + Ve = 1). In the special case considered here (Case 0 in Cohen, 1988, p. 407ff.) the proportion of variance explained is given by Vs = R2Y·B and the residual variance by Ve = 1 − R2Y·B. Thus:

f2 = R2Y·B/(1 - R2Y·B)
and

R2Y·B = f2/(1 + f2).
Cohen (1988, p. 412) defined the following conventional values for the effect size f2:
small f2 = 0.02
medium f2 = 0.15
large f2 = 0.35
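The two conversion formulas above can be sketched as small helper functions (a sketch, not G*Power code; the function names are mine):

```python
# Converting between R^2_{Y.B} and Cohen's f^2 using
# f^2 = R^2/(1 - R^2) and R^2 = f^2/(1 + f^2).

def f2_from_r2(r2: float) -> float:
    """Effect size f^2 from the squared multiple correlation R^2_{Y.B}."""
    if not 0 <= r2 < 1:
        raise ValueError("R^2 must lie in [0, 1)")
    return r2 / (1.0 - r2)

def r2_from_f2(f2: float) -> float:
    """Squared multiple correlation R^2_{Y.B} from the effect size f^2."""
    if f2 < 0:
        raise ValueError("f^2 must be non-negative")
    return f2 / (1.0 + f2)

print(round(f2_from_r2(0.10), 7))  # 0.1111111, as used in the basic example below
print(round(r2_from_f2(0.15), 4))  # R^2 corresponding to a 'medium' effect
```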

Pressing the Determine button on the left side of the effect size label in the main window opens the effect size drawer that may be used to calculate f2 from other population parameters.

### Effect size from squared multiple correlation coefficient

Choosing From correlation coefficient allows one to calculate the effect size f2 from the squared multiple correlation R2Y·B.

### Effect size from predictor correlations

By choosing the From predictor correlation matrix option one may compute ρ2 from the matrix of correlations among the predictor variables and the correlations between predictors and the dependent variable Y. Pressing the Insert/edit matrix button opens a window in which one can specify

1. the row vector u containing the correlations between each of the m predictors Xi and the dependent variable Y and
2. the m × m matrix B of correlations among the predictors (see below).

The squared multiple correlation coefficient is then given by ρ2 = u B−1 u'. Each input correlation must lie in the interval [−1, 1]. The matrix B must be positive-definite. The resulting ρ2 must lie in the interval [0, 1]. Pressing the Calc ρ2 button tries to calculate ρ2 from the input and checks the positive-definiteness of matrix B as well as the restrictions on ρ2.
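The ρ2 = u B−1 u' computation, including the checks described above, can be sketched as follows (the input values are illustrative, not taken from the text):

```python
# rho^2 from the predictor-criterion correlations u and the matrix B of
# correlations among the predictors, with validity checks as in the text.
import numpy as np

def rho2_from_correlations(u, B):
    u = np.asarray(u, dtype=float)
    B = np.asarray(B, dtype=float)
    if np.any(np.abs(u) > 1) or np.any(np.abs(B) > 1):
        raise ValueError("each correlation must lie in [-1, 1]")
    try:
        np.linalg.cholesky(B)  # fails if B is not positive-definite
    except np.linalg.LinAlgError:
        raise ValueError("B must be positive-definite")
    rho2 = float(u @ np.linalg.inv(B) @ u)  # u B^{-1} u'
    if not 0 <= rho2 <= 1:
        raise ValueError("resulting rho^2 lies outside [0, 1]")
    return rho2

# Two hypothetical predictors correlating .3 and .4 with Y and .2 with each other:
u = [0.3, 0.4]
B = [[1.0, 0.2], [0.2, 1.0]]
print(rho2_from_correlations(u, B))  # approx 0.2104
```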

## Options

This test has no options.

## Examples

### Basic Example

We assume that a dependent variable Y is predicted by a set B of 5 predictors. We further assume that the population R2Y·B is .10, that is, that the 5 predictors account for 10% of the variance of Y. The sample size is N = 95 subjects. What is the power of the F test at α = 0.05?

First, by inserting R2 = 0.10 in the effect size dialog we calculate the corresponding effect size f2 = 0.1111111. We then use the following settings in G*Power to calculate the statistical power:

#### Select

Type of power analysis: Post hoc

#### Input

Effect size f2 : 0.1111111
α err prob: 0.05
Total sample size: 95
Number of predictors: 5

#### Output

Noncentrality parameter λ: 10.555555
Critical F : 2.316858
Numerator df: 5
Denominator df: 89
Power (1 − β): 0.673586

The output shows that the power of this test is about 0.67. This confirms the value estimated by Cohen (1988, p. 424) in his example 9.1, which uses identical values.
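This post hoc computation can be reproduced with SciPy's noncentral F distribution (a sketch of the underlying calculation; G*Power's own implementation may differ in numerical detail):

```python
# Post hoc power for the basic example: R^2 = .10, N = 95, 5 predictors.
from scipy.stats import f as f_dist, ncf

n, p, alpha = 95, 5, 0.05
f2 = 0.10 / (1 - 0.10)        # effect size from R^2 = .10
df1, df2 = p, n - p - 1       # 5 and 89
lam = f2 * n                  # noncentrality parameter, approx 10.5556
f_crit = f_dist.ppf(1 - alpha, df1, df2)
power = 1 - ncf.cdf(f_crit, df1, df2, lam)
print(round(f_crit, 6), round(power, 6))  # approx 2.316858 and 0.673586
```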

### Example showing relations to a one-way ANOVA and the two-sample t-test

We assume the means 2, 3, 2, 5 for the k = 4 experimental groups in a one-factor design. The sample sizes in the groups are 5, 6, 6, 5, respectively, and the common standard deviation is assumed to be σ = 2. Using the effect size drawer of the one-way ANOVA procedure we calculate from these values the effect size f = 0.5930904. With α = 0.05, 4 groups, and a total sample size of 22, a power of 0.536011 is computed.

An equivalent analysis could be done using the MRC procedure. To this end we set the effect size of the MRC procedure to f2 = 0.5930904^2 = 0.351756 and the number of predictors to (number of groups − 1), that is, to k − 1 = 3 in our example. Now we choose the remaining parameters α and total sample size exactly as in the one-way ANOVA case. The resulting power value is identical to that arrived at using the one-way ANOVA procedure.
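The equivalence can be checked numerically (a sketch under the assumptions of this example: f = 0.5930904, k = 4 groups, N = 22):

```python
# One-way ANOVA example recast as an MRC analysis: f^2 = f^2 from the ANOVA,
# k - 1 predictors, same alpha and total sample size.
from scipy.stats import f as f_dist, ncf

n, k, alpha = 22, 4, 0.05
f2 = 0.5930904 ** 2           # approx 0.351756
df1, df2 = k - 1, n - k       # 3 and 18; note n - k = n - (k - 1) - 1
lam = f2 * n                  # noncentrality parameter
power = 1 - ncf.cdf(f_dist.ppf(1 - alpha, df1, df2), df1, df2, lam)
print(round(power, 6))        # approx 0.536011, matching the ANOVA result
```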

From the fact that the two-sided t-test for the difference in means of two independent groups is a special case of the one-way ANOVA, it can be concluded that this test can also be regarded as a special case of the MRC procedure. The relation between the effect size d of the t-test and f2 is f2 = (d/2)^2.

### Example showing the relation to two-sided tests of a point-biserial correlation

For power analyses of tests of whether a point-biserial correlation r is different from zero, we recommend using the special procedure provided in G*Power. Nevertheless, a power analysis of this test in its two-sided variant can also be done using G*Power's MRC procedure. We simply need to set R2 = r2 and the Number of predictors = 1. Given the correlation r = 0.5 (r2 = 0.25) we get f2 = 0.25/(1 − 0.25) = 0.333. For α = 0.05 and total sample size N = 12 a power of 0.439627 is computed in both procedures.
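The same numbers follow from the MRC formulas directly (a sketch with the values of this example: r = 0.5, N = 12, one predictor):

```python
# Point-biserial test as an MRC analysis: R^2 = r^2 and one predictor.
from scipy.stats import f as f_dist, ncf

n, alpha, r = 12, 0.05, 0.5
f2 = r**2 / (1 - r**2)        # 0.25/0.75, approx 0.333
df1, df2 = 1, n - 2           # one predictor: df2 = N - 1 - 1 = 10
lam = f2 * n                  # 4.0
power = 1 - ncf.cdf(f_dist.ppf(1 - alpha, df1, df2), df1, df2, lam)
print(round(power, 6))        # approx 0.439627
```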

## Related tests

ANOVA: Fixed effects, omnibus, one-way
Correlation: Point biserial model
Linear multiple regression: Fixed model, Linear multiple regression: Random model
Means: Difference between two independent means (two groups)

## Implementation notes

The H0 distribution is the central F distribution with numerator degrees of freedom df1 = p and denominator degrees of freedom df2 = N − p − 1, where N is the sample size and p the number of predictors in the set B explaining the proportion of variance given by R2Y·B. The H1 distribution is the noncentral F distribution with the same degrees of freedom and noncentrality parameter λ = f2 · N.
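These implementation notes translate into a short generic routine (a sketch; the function and parameter names are mine, not G*Power's):

```python
# Generic post hoc power for the MRC test that R^2_{Y.B} = 0:
# H0 ~ F(p, N - p - 1), H1 ~ noncentral F with lambda = f^2 * N.
from scipy.stats import f as f_dist, ncf

def mrc_post_hoc_power(f2: float, n: int, p: int, alpha: float = 0.05) -> float:
    """Power of the F test of R^2_{Y.B} = 0 in the fixed-model regression."""
    df1, df2 = p, n - p - 1
    lam = f2 * n                              # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)  # critical value under H0
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

print(round(mrc_post_hoc_power(0.1111111, 95, 5), 6))  # approx 0.673586
```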

## Validation

The results were checked against the values produced by G*Power 2.0 and those produced by PASS (Hintze, 2006). Slight deviations were found from the values tabulated in Cohen (1988). This is due to an approximation used by Cohen (1988) that underestimates the noncentrality parameter λ and therefore also the power. This issue is discussed more thoroughly in Erdfelder, Faul, and Buchner (1996).

## References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Erdfelder, E., Faul, F., & Buchner, A. (1996). GPOWER: A general power analysis program. Behavior Research Methods, Instruments, & Computers, 28, 1-11.

Hintze, J. (2006). NCSS, PASS, and GESS. Kaysville, Utah: NCSS.

Thursday, December 12, 2013