# Proportion (binomial test): Difference from constant (one sample case)

The problem considered in this case is whether the probability π of an event in a given population has the constant value π0 (null hypothesis). The null and the alternative hypotheses can be stated as:
H0 : π − π0 = 0
H1 : π − π0 ≠ 0
A two-tailed binomial test should be performed to test this undirected hypothesis. If it is possible to predict the direction of the deviation of a sample proportion p from π0 (e.g. p − π0 < 0), then a one-tailed binomial test should be chosen.

## Effect size index

The effect size g is defined as the deviation from the constant probability π0, that is, g = π − π0.

The definition of g implies the following restriction: ε ≤ (π0 + g) ≤ 1 − ε, with ε = 10⁻⁶. In an a priori analysis the additional restriction |g| > ε must be respected (this is in accordance with the general rule that zero-effect hypotheses are undefined in a priori analyses).
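These restrictions are easy to express in code. The following sketch (the function name and the hard-coded ε are ours, chosen to mirror the text, not part of G*Power) checks a candidate effect size:

```python
EPS = 1e-6  # epsilon as defined in the text

def valid_effect_size(g, pi0, a_priori=False):
    """Check that pi0 + g stays inside [EPS, 1 - EPS]; in an
    a priori analysis additionally require |g| > EPS."""
    ok = EPS <= pi0 + g <= 1 - EPS
    if a_priori:
        ok = ok and abs(g) > EPS
    return ok
```

For example, g = 0.15 with π0 = 0.65 is admissible, whereas g = 0.4 is not, because π0 + g would exceed 1 − ε.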

Pressing the Determine button to the left of the effect size label opens the effect size drawer.

You can use this drawer to calculate the effect size g from π0 (called P1 in the drawer) and π (called P2 in the drawer), or from one of several relations between them. When you open the effect size drawer, the value of P1 is initialized to the value of the Constant proportion input field in the main window.

There are four different ways to specify P2:
1. Direct input: Specify P2 in the corresponding input field below P1.
2. Difference: Select Difference P2-P1 and insert the difference into the text field on the left side (the difference is identical to g).
3. Ratio: Select Ratio P2/P1 and insert the ratio value into the text field on the left side.
4. Odds ratio: Select Odds ratio and insert the odds ratio (P2/(1 − P2))/(P1/(1 − P1)) into the text field on the left side.
The relational value given in the input field (and the selection) on the left side and the two proportions given in the two input fields on the right side may be synchronized by pressing the Sync values button.
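Each of the three relational specifications determines P2 from P1 by a simple conversion formula. A minimal Python sketch (the function names are ours, for illustration only):

```python
def p2_from_difference(p1, diff):
    """P2 from the difference g = P2 - P1."""
    return p1 + diff

def p2_from_ratio(p1, ratio):
    """P2 from the ratio P2 / P1."""
    return p1 * ratio

def p2_from_odds_ratio(p1, odds_ratio):
    """P2 from the odds ratio (P2/(1-P2)) / (P1/(1-P1)):
    scale the odds of P1, then convert the odds back to a proportion."""
    odds2 = odds_ratio * p1 / (1.0 - p1)
    return odds2 / (1.0 + odds2)
```

With P1 = 0.65 and P2 = 0.8 (the values used in the example below), the difference is 0.15, the ratio is 0.8/0.65 ≈ 1.231, and the odds ratio is (0.8/0.2)/(0.65/0.35) ≈ 2.154; each conversion recovers P2 = 0.8.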

Press the Calculate button to preview the Effect size g resulting from your input values. Press the Transfer to main window button
1. to calculate the effect size g = π − π0 = P2 − P1, and
2. to change, in the main window, the Constant proportion field to P1 and the Effect size g field to g as calculated.

## Options

The binomial distribution is discrete. It is thus normally not possible to realize the nominal α-level exactly. For two-sided tests this raises the problem of how to "distribute" α between the two tails. G*Power offers the three options listed below, the first being the default:
1. Assign α/2 to both sides: Both sides are handled independently in exactly the same way as in a one-sided test. The only difference is that α/2 is used instead of α. Of the three options offered by G*Power, this one generally leads to the largest deviation of the actual α from the nominal α (in post hoc analyses).
2. Assign to minor tail α/2, then rest to major tail (α2 = α/2, α1 = α − α2 ): First α/2 is applied to the side of the central distribution that is farther away from the noncentral distribution (minor tail). The criterion used for the other side is then α − α1, where α1 is the actual α found on the minor side. Since α1 ≤ α/2 one can conclude that (in post hoc analyses) the sum of the actual values α1 + α2 is in general closer to the nominal α-level than it would be if α/2 were assigned to both sides (see Option 1).
3. Assign α/2 to both sides, then increase to minimize the difference of α1 + α2 to α: The first step is exactly the same as in Option 1. Then, in the second step, the critical values on both sides of the distribution are increased (using the lower of the two potential incremental α-values) until the sum of both actual α values is as close as possible to the nominal α.
Press the Options button in the main window to select one of these options.
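Option 1 can be sketched directly: each tail receives α/2, and the critical values are the most extreme counts whose tail probabilities under H0 do not exceed that share. The following Python sketch hand-rolls the binomial distribution with `math.comb` (the function names are ours, not G*Power's):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(0, k + 1))

def two_sided_criticals_half_alpha(n, p0, alpha):
    """Option 1: assign alpha/2 to each tail independently.
    Returns (lower, upper, actual_alpha); 'reject' means
    X <= lower or X >= upper."""
    # largest L with P(X <= L) <= alpha/2 (L = -1 if no such count)
    lower = max(k for k in range(-1, n + 1)
                if binom_cdf(k, n, p0) <= alpha / 2)
    # smallest U with P(X >= U) <= alpha/2
    upper = min(k for k in range(0, n + 2)
                if 1 - binom_cdf(k - 1, n, p0) <= alpha / 2)
    a1 = binom_cdf(lower, n, p0)           # actual alpha, lower tail
    a2 = 1 - binom_cdf(upper - 1, n, p0)   # actual alpha, upper tail
    return lower, upper, a1 + a2
```

For the sign-test case n = 20, π0 = 0.5, α = 0.05 this yields the critical values 5 and 15 with an actual two-sided α of about 0.041, below the nominal 0.05, which illustrates the conservatism of Option 1.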

## Examples

We assume a constant proportion π0 = 0.65 in the population and an effect size g = 0.15, that is, π = 0.65 + 0.15 = 0.8. We want to know the power of a one-sided test given α = .05 and a total sample size of N = 20.

### Select

Type of power analysis: Post hoc

### Options

Alpha balancing in two-sided tests: Assign α/2 to both sides.

### Input

Tail(s): One
Effect size g: 0.15
α err prob: 0.05
Total sample size: 20
Constant proportion: 0.65

### Output

Lower critical N: 17
Upper critical N: 17
Power (1-β err prob): 0.411449
Actual α: 0.044376
The results show that we should reject the null hypothesis of π = 0.65 if the relevant event is observed in at least 17 of the 20 possible cases. Using this criterion, the actual α is 0.044, that is, slightly lower than the requested α of 0.05. The power is 0.41. The graph displayed in the main window shows the distribution plots for the example.

The red and blue curves show the binomial distribution under H0 and H1, respectively. The vertical line is positioned at the critical value N = 17. The horizontal portions of the graph should be interpreted as the top of a bar ranging from N − 0.5 to N + 0.5 around an integer N, where the height of the bar corresponds to p(N).
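The post hoc computation of this example is easy to reproduce outside G*Power. A minimal Python sketch, again hand-rolling the binomial tail with `math.comb`:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def upper_tail(c, n, p):
    """P(X >= c) for X ~ B(n, p)."""
    return sum(binom_pmf(k, n, p) for k in range(c, n + 1))

n, pi0, g, alpha = 20, 0.65, 0.15, 0.05
# smallest critical count c with P(X >= c | pi0) <= alpha
c = min(k for k in range(n + 1) if upper_tail(k, n, pi0) <= alpha)
actual_alpha = upper_tail(c, n, pi0)   # tail probability under H0
power = upper_tail(c, n, pi0 + g)      # same tail under H1
```

Running this gives c = 17, an actual α of about 0.0444, and a power of about 0.4114, matching the output shown above.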

We now use the Power Plot window to plot power values for a range of sample sizes. Press the X-Y plot for a range of values button at the bottom of the main window to open the Power Plot window. We choose to plot power as a function of the total sample size, for sample sizes from 10 through 50 in steps of 1. Next, we select to plot just one graph with α = 0.05 and effect size g = 0.15. Pressing the Draw plot button produces the plot shown below.

It can be seen that the power does not increase monotonically but in a zig-zag fashion. This behavior is due to the discrete nature of the binomial distribution, which prevents arbitrary α values from being realized. Thus, the curve should not be interpreted as showing that the power for a fixed α sometimes decreases with increasing sample size. The real reason for the non-monotonic behavior is that the actual α level that can be realized deviates more or less from the nominal α level at different sample sizes.

This non-monotonic behavior of the power curve poses a problem if we want to determine, in an a priori analysis, the minimal sample size needed to achieve a certain power. In these cases G*Power always tries to find the lowest sample size for which the power is not less than the specified value. In the case depicted in the graph shown above, for instance, G*Power would choose N = 16 as the result of a search for the sample size that leads to a power of at least 0.3. All types of power analyses except post hoc are confronted with similar problems. To ensure that the intended result has been found, we recommend checking the results of these types of power analysis in a power vs. sample size plot.
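The a priori search described above can be sketched as a simple scan over sample sizes: for each N, determine the one-sided critical count, compute the power, and stop at the first N whose power reaches the target. A minimal Python version (function names are ours, not G*Power's):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def one_sided_power(n, pi0, pi1, alpha):
    """Power of the one-sided (upper-tail) binomial test for given n."""
    tail = lambda c, p: sum(binom_pmf(k, n, p) for k in range(c, n + 1))
    # smallest critical count c with P(X >= c | pi0) <= alpha
    c = min(k for k in range(n + 2) if tail(k, pi0) <= alpha)
    return tail(c, pi1)

def min_sample_size(pi0, pi1, alpha, target_power, n_max=1000):
    """Lowest n for which the power is not less than target_power."""
    for n in range(1, n_max + 1):
        if one_sided_power(n, pi0, pi1, alpha) >= target_power:
            return n
    return None
```

For π0 = 0.65, π = 0.8, α = 0.05 and a target power of 0.3 this scan returns N = 16, in agreement with the example; because of the zig-zag power curve, some larger sample sizes (e.g. N = 17 in the plot) can fall back below the target again.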

## Related tests

Proportions: Sign test (binomial test).

## Implementation notes

The H0 distribution is the binomial distribution B(N, π0). The H1 distribution is the binomial distribution B(N, π0 + g). N denotes the total sample size, π0 the constant proportion assumed in the null hypothesis, and g the effect size index as defined above.

## Validation

The results of G*Power for the special case of the sign test, that is π0 = 0.5, were checked against the tabulated values given in Cohen (1969, Chapter 5). Cohen always chose from the realizable α values the one that is closest to the nominal value, even if it is larger than the nominal value. G*Power, in contrast, always requires the actual α to be lower than the nominal value. In cases where the α value chosen by Cohen happens to be lower than the nominal α, the results computed with G*Power were very similar to the tabulated values. In the other cases, the power values computed by G*Power were lower than the tabulated values.

In the general case (π0 ≠ 0.5) the results of post hoc analyses for a number of parameters were checked against the results produced by PASS (Hintze, 2006). No differences were found for one-sided tests. The results for two-sided tests were also identical if the alpha balancing method Assign α/2 to both sides was chosen in G*Power.

## References

Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York, NY: Academic Press.

Hintze, J. (2006). NCSS, PASS, and GESS. Kaysville, Utah: NCSS.

Sunday, 19 May 2013