Types of Error in Medical Statistics

Medical research relies heavily on statistical methods to judge whether its findings are significant. These methods are not immune to errors, notably type 1 and type 2 errors. A study therefore needs to be carefully designed to reduce the chances of these two errors.

Null hypothesis

The null hypothesis states that the observed phenomenon is due to chance alone and is not significant. We then test how likely our observations would be if the null hypothesis were true; if that probability is sufficiently low, we can consider our findings statistically significant.

For example, if we are comparing a drug against a placebo, our null hypothesis is that the drug is equivalent to the placebo. We denote this as H0. Against it we set an alternative hypothesis, that the drug is better than the placebo, denoted as H1. There can be more than one alternative hypothesis as well.

  H0    Null hypothesis           The drug and placebo are equivalent
  H1    Alternative hypothesis    The drug is better than placebo

Types of error

While testing the null hypothesis, we can never be 100% certain that our conclusion is the right one. However small, there is always some chance that our test result is wrong.

There can be two kinds of error in such cases —

  • The null hypothesis is true, but we reject it (type 1 error)
  • The null hypothesis is false, but we accept it (type 2 error)

So the type 1 error may be thought of as a false positive whereas the type 2 error can be considered a false negative.

                    H0 is true            H0 is false
  H0 is accepted    Correct conclusion    Type 2 error
  H0 is rejected    Type 1 error          Correct conclusion

  • The chance of a type 1 error is denoted by α
  • The chance of a type 2 error is denoted by β

Our aim while designing a study is to keep both α and β as low as possible.

Acceptable risk

Although we attempt to reduce α and β as much as possible, we can never make them zero. In fact, a reduction in one is often associated with an increase in the other. The only way to reduce both simultaneously is to increase the sample size.
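This trade-off is easy to see with a quick calculation. As a rough sketch in base R (the effect size delta and the per-group sample size here are made-up values; power.t.test assumes a two-sample, two-sided t-test by default):

  # With the sample size held fixed, tightening α (fewer false positives)
  # reduces the power, i.e. raises β (more false negatives)
  power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05)$power  # roughly 0.47
  power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.01)$power  # roughly 0.24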

The commonly accepted values for α and β are as follows —

  α    0.05    5% chance of a false positive
  β    0.2     20% chance of a false negative
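
What α = 0.05 means in practice can be demonstrated with a small simulation. The R sketch below repeatedly runs a t-test on two groups drawn from the same distribution, so the null hypothesis is true by construction; about 5% of the tests still come out "significant" purely by chance (the group size and the number of repetitions are arbitrary choices):

  set.seed(1)  # arbitrary seed, only for reproducibility
  # 10000 t-tests on data where H0 is true by construction
  pvals <- replicate(10000, t.test(rnorm(30), rnorm(30))$p.value)
  mean(pvals < 0.05)  # proportion of false positives, close to 0.05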

Significance of a study

A study can be considered significant if we can be fairly certain that the findings are meaningful and not due to pure chance.

The α value determines the level of significance. Statistical hypothesis testing produces a p-value, which is the probability of obtaining results at least as extreme as the observed ones if the null hypothesis were true. To call a finding significant, the p-value must be less than α. In other words, the observed results must be sufficiently unlikely under the null hypothesis, with the threshold set by the predetermined value of α.
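
To make this concrete, here is a minimal example in R with entirely made-up outcome data for a hypothetical drug group and placebo group; the numbers are illustrative only:

  drug    <- c(5.1, 6.2, 5.8, 6.5, 5.9, 6.1, 6.8, 5.5)  # hypothetical outcomes
  placebo <- c(4.8, 5.0, 5.2, 4.9, 5.3, 4.7, 5.1, 5.0)  # hypothetical outcomes
  result <- t.test(drug, placebo)
  result$p.value         # well below 0.05 for these made-up numbers
  result$p.value < 0.05  # TRUE: we reject H0 at the 5% level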

Power of a study

The power of a study is its ability to detect a real effect when one exists, that is, the probability of correctly rejecting a false null hypothesis. Numerically, power = 1 − β.

Commonly 80% is considered the minimum acceptable power when designing a study. This translates to a β value of 0.2.

A power of 80% means that if a real effect exists, the study has an 80% chance of detecting it. Conversely, there is a 20% chance that the study will miss a real effect and return a false negative.
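
The power achieved by a given design can be computed before the study is run. A sketch in R, again with a hypothetical effect size (delta, in units of the outcome, with sd the assumed standard deviation):

  # Power of a two-sample t-test with 64 participants per group
  power.t.test(n = 64, delta = 0.5, sd = 1, sig.level = 0.05)$power  # about 0.80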

Sample size calculation

While designing a study, it is crucial to determine the minimum sample size required to achieve the desired levels of power and significance. With a fixed sample size, increasing the power (lowering β) requires relaxing the significance level (raising α), and vice versa. To improve both at once, the sample size has to be increased.

There are many online and offline sample size calculators. I personally use R for these calculations.
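
As a minimal sketch of how this looks in R (base R, no extra packages): leaving n unspecified makes power.t.test solve for it. The effect size below is a hypothetical assumption; in a real study it would come from prior data or from the smallest clinically meaningful difference.

  # Smallest n per group for 80% power at the 5% significance level
  power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)
  # reports n of about 64 per group for a two-sample, two-sided t-test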
