Sample size calculations involve the following entities:
z: z value (obtained from a table; e.g., for a 95% confidence interval, z = 1.96)
Alpha (a): level of significance
ME: margin of error
In this post we will look at alpha and beta.
For brevity, the following conventions will be used:
a : alpha
b : beta
What is a?
You may recall that I discussed Type I and Type II errors in a previous post. a is the probability of making a Type I error.
What about b?
b is the probability of making a Type II error.
a is also the threshold at which we reject the null hypothesis (we reject when the p-value falls below a). Therefore, a is also called the level of significance.
Typical values of a are: 5% (0.05), 2.5% (0.025), 1% (0.01). When we set a at the 5% level, we are essentially saying that the probability of making a Type I error will be 5%. Conversely, (1 - a) corresponds to the 95% confidence interval: we are 95% confident that we have captured the truth. Values of a above 5% are considered unacceptably high by convention.
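As a quick check of the z values above, scipy can reproduce them from a. A minimal sketch, assuming a two-sided test (the helper name is mine, not a standard function):

```python
from scipy.stats import norm

def z_for_alpha(alpha):
    """z value for a two-sided test at significance level alpha.
    (Illustrative helper; name and interface are my own.)"""
    return norm.ppf(1 - alpha / 2)

print(round(z_for_alpha(0.05), 2))  # a = 5%  -> 1.96
print(round(z_for_alpha(0.01), 2))  # a = 1%  -> 2.58
```

This is exactly where the familiar 1.96 comes from: it is the point that leaves a/2 = 2.5% in each tail of the standard normal distribution.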
b is the probability of making a Type II error (failing to detect a difference that truly exists). Conversely, (1 - b) gives us the probability of detecting the truth. This is called the power of the study: the ability to detect a difference where it truly exists.
Typical values of (1 - b) are: 80% (0.8), 90% (0.9), 95% (0.95), 99% (0.99), 99.9% (0.999). The higher the power, the larger the sample size of the study. Higher power, although desirable, may not be feasible at times. Lower power may render the study useless by failing to detect an existing difference. Power less than 80% is considered unacceptably low by convention.
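To make the power/sample-size trade-off concrete, here is a sketch using the standard normal-approximation formula for comparing two means. The scenario (sigma = 1, a difference of 0.5 SD) is a made-up illustration, not something from this post:

```python
import math
from scipy.stats import norm

def n_per_group(alpha, power, sigma, delta):
    """Approximate sample size per group for a two-sample, two-sided test,
    using n = 2 * (z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2."""
    z_a = norm.ppf(1 - alpha / 2)  # z for the chosen a
    z_b = norm.ppf(power)          # z for the chosen power, i.e. z_{1-b}
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical example: detecting a 0.5 SD difference at a = 0.05.
# Note how n grows as the desired power rises.
for power in (0.8, 0.9, 0.95):
    print(power, n_per_group(0.05, power, sigma=1, delta=0.5))
```

Raising the power from 80% to 95% in this hypothetical example noticeably inflates the required sample size, which is exactly the feasibility trade-off described above.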
The values of a and b should be set/defined in advance (a priori).
If a study fails to demonstrate any statistically significant findings, a 'post hoc' (after the effect/study) power analysis should be performed to determine the actual power of the study.
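A post-hoc power analysis plugs the observed effect size and the actual sample size back into the power formula. A sketch under the same two-sample normal-approximation assumptions as above (the observed difference of 0.3 SD and n = 30 per group are hypothetical numbers for illustration):

```python
import math
from scipy.stats import norm

def achieved_power(alpha, sigma, delta, n):
    """Approximate achieved power of a two-sided, two-sample test with n
    subjects per group: Phi(delta / (sigma * sqrt(2/n)) - z_{1-a/2})."""
    z_a = norm.ppf(1 - alpha / 2)
    se = sigma * math.sqrt(2 / n)   # standard error of the difference
    return norm.cdf(delta / se - z_a)

# Hypothetical: an observed 0.3 SD difference with only 30 per group
print(round(achieved_power(0.05, sigma=1, delta=0.3, n=30), 2))  # -> 0.21
```

A result like this (roughly 21%, far below the conventional 80%) would tell us the study was underpowered, so its failure to reach significance says little about whether a real difference exists.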