Published on January 18, 2021 by Pritha Bhandari. Revised on November 11, 2022.

In statistics, a Type I error is a false positive conclusion, while a Type II
error is a false negative conclusion. Making a statistical decision always involves uncertainties, so the risks of making these errors are unavoidable in hypothesis testing. The probability of making a Type I error is the
significance level, or alpha (α), while the probability of making a Type II error is beta (β). These risks can be minimized through careful planning in your study design. Using hypothesis testing, you can make decisions about whether your data support or refute your research
predictions with null and alternative hypotheses. Hypothesis testing starts with the assumption of no difference between groups or no relationship between variables in the population—this is the null hypothesis. It’s always paired with
an alternative hypothesis, which is your research prediction of an actual difference between groups or a true relationship between variables. In a clinical drug study, for example, the null hypothesis (H0) might be that a new drug has no effect on symptoms, while the alternative hypothesis (Ha) is that it improves them. Then, you decide whether the null hypothesis can be
rejected based on your data and the results of a statistical test. Since these decisions are based on probabilities, there is always a risk of drawing the wrong conclusion.

A Type II error happens when you get false negative results: you conclude that the drug intervention didn't improve symptoms when it actually did. Your study may have missed key indicators of improvement or attributed any improvement to other factors instead.

Type I error

A Type I error means rejecting the null hypothesis when it's actually true. It means concluding that results are statistically significant when, in reality, they came about purely by chance or because of unrelated factors.

The risk of committing this error is the significance level (alpha or α) you choose. That's a value you set at the beginning of your study, against which you assess the statistical probability of obtaining your results (the p value). The significance level is usually set at 0.05, or 5%. This means that your results have at most a 5% chance of occurring if the null hypothesis is actually true.

If the p value of your test is lower than the significance level, your results are statistically significant and consistent with the alternative hypothesis. If your p value is higher than the significance level, your results are considered statistically non-significant.

Example: Statistical significance and Type I error
In your clinical study, you compare the symptoms of patients who received the new drug intervention with those of patients who received a control treatment. Using a t test, you obtain a p value of .035. This p value is lower than your alpha of .05, so you consider your results statistically significant and reject the null hypothesis. However, the p value means that there is a 3.5% chance of results like yours occurring if the null hypothesis is true, so there is still a risk of making a Type I error.

To reduce the Type I error probability, you can simply set a lower significance level.
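To make the Type I error rate concrete, here is a small stdlib-only Python simulation (a sketch, not part of the original article): both groups are drawn from the same population, so the null hypothesis is true by construction and every rejection is a false positive. A two-sample z-test approximation stands in for the t test, which is reasonable at this sample size.

```python
# Monte Carlo sketch: how often does a test reject a TRUE null hypothesis?
import math
import random

random.seed(42)          # reproducible runs
ALPHA = 0.05             # significance level: the Type I error rate we accept
N_SIMS, N = 5000, 50     # number of simulated studies, per-group sample size

def two_sided_p(z):
    """Two-sided p value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(N_SIMS):
    # "Treatment" and "control" come from the SAME population, so the
    # null hypothesis is true and any rejection is a Type I error.
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    mean_a, mean_b = sum(a) / N, sum(b) / N
    var_a = sum((x - mean_a) ** 2 for x in a) / (N - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (N - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / N + var_b / N)
    if two_sided_p(z) < ALPHA:
        false_positives += 1

type1_rate = false_positives / N_SIMS
print(f"Observed Type I error rate: {type1_rate:.3f} (alpha = {ALPHA})")
```

Over many repetitions, the observed false positive rate settles close to the chosen alpha, which is exactly what the significance level promises.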
Type I error rate

The null hypothesis distribution curve below shows the probabilities of obtaining all possible results if the study were repeated with new samples and the null hypothesis were true in the population. At the tail end, the shaded area represents alpha; in statistics, it's also called the critical region. If your results fall in the critical region of this curve, they are considered statistically significant and the null hypothesis is rejected. However, this is a false positive conclusion, because the null hypothesis is actually true in this case!

Type II error

A Type II error means not rejecting the null hypothesis when it's actually false. This is not quite the same as "accepting" the null hypothesis, because hypothesis testing can only tell you whether to reject the null. Instead, a Type II error means failing to conclude there was an effect when there actually was one.

In reality, your study may not have had enough statistical power to detect an effect of a certain size. Power is the extent to which a test can correctly detect a real effect when there is one; a power level of 80% or higher is usually considered acceptable. The risk of a Type II error is inversely related to the statistical power of a study: the higher the statistical power, the lower the probability of making a Type II error. Even so, a Type II error may occur if the true effect is smaller than the effect size your study was designed to detect, because a smaller effect is unlikely to be detected with inadequate statistical power.

Statistical power is determined by the size of the effect, the variability (measurement error) in your data, the sample size, and the significance level you set.
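A quick Monte Carlo sketch can make beta concrete. In the stdlib-only Python example below, the alternative hypothesis is true by construction (a real effect of 0.5 standard deviations, an illustrative assumption, not a figure from the article), and we count how often a test still fails to reject the null: those misses are Type II errors.

```python
# Monte Carlo sketch: how often does a test MISS a real effect?
import math
import random

random.seed(7)
ALPHA = 0.05
N_SIMS, N = 5000, 50
TRUE_EFFECT = 0.5        # assumed true difference, in standard-deviation units

def two_sided_p(z):
    """Two-sided p value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

misses = 0
for _ in range(N_SIMS):
    # The alternative hypothesis is true: group a really does differ from b.
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    mean_a, mean_b = sum(a) / N, sum(b) / N
    var_a = sum((x - mean_a) ** 2 for x in a) / (N - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (N - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / N + var_b / N)
    if two_sided_p(z) >= ALPHA:
        misses += 1  # failed to reject a false null: a Type II error

beta = misses / N_SIMS
power = 1 - beta
print(f"beta = {beta:.2f}, power = {power:.2f}")
```

With 50 participants per group and a 0.5 SD effect, the simulated power comes out around 70%, below the conventional 80% target, illustrating why underpowered studies so often miss real effects.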
To (indirectly) reduce the risk of a Type II error, you can increase the sample size or the significance level.

Type II error rate

The alternative hypothesis distribution curve below shows the probabilities of obtaining all possible results if the study were repeated with new samples and the alternative hypothesis were true in the population. The Type II error rate is beta (β), represented by the shaded area on the left side. The remaining area under the curve represents statistical power, which is 1 − β. Increasing the statistical power of your test directly decreases the risk of making a Type II error.

Trade-off between Type I and Type II errors

The Type I and Type II error rates influence each other. That's because the significance level (the Type I error rate) affects statistical power, which is inversely related to the Type II error rate. This means there's an important trade-off between Type I and Type II errors: setting a lower significance level decreases the Type I error risk but increases the Type II error risk, while increasing the power of a test decreases the Type II error risk but increases the Type I error risk.
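The trade-off can also be seen in a closed-form calculation. The sketch below uses the standard normal-approximation power formula for a two-sided, two-sample test (the negligible opposite-tail term is ignored); the effect size (0.5 SD) and per-group sample size (50) are illustrative assumptions, not figures from the article.

```python
# Analytic sketch of the alpha/beta trade-off for a two-sample z-test.
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def power(alpha, effect, n):
    """Approximate power of a two-sided two-sample z-test, per-group size n."""
    se = math.sqrt(2 / n)               # SE of the mean difference (unit SDs)
    z_crit = nd.inv_cdf(1 - alpha / 2)  # critical value for this alpha
    return nd.cdf(effect / se - z_crit)

for alpha in (0.05, 0.01):
    beta = 1 - power(alpha, effect=0.5, n=50)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.2f}")
```

Tightening alpha from 0.05 to 0.01 roughly doubles beta in this scenario (from about 0.29 to about 0.53), a concrete instance of the trade-off: a stricter guard against false positives buys a much larger risk of false negatives.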
This trade-off is visualized in the graph below, which shows two curves: the null hypothesis distribution and the alternative hypothesis distribution.
Type I and Type II errors occur where these two distributions overlap. The blue shaded area represents alpha, the Type I error rate, and the green shaded area represents beta, the Type II error rate. By setting the Type I error rate, you indirectly influence the size of the Type II error rate as well. It's important to strike a balance between the risks of making Type I and Type II errors: reducing alpha always comes at the cost of increasing beta, and vice versa.

Is a Type I or Type II error worse?

For statisticians, a Type I error is usually worse. In practical terms, however, either type of error could be worse depending on your research context. A Type I error means mistakenly going against the main statistical assumption of a null hypothesis. This may lead to new policies, practices, or treatments that are inadequate or a waste of resources.

Example: Consequences of a Type I error
Based on the incorrect conclusion that the new drug intervention is effective, over a million patients are prescribed the medication despite risks of severe side effects and inadequate research on the outcomes. This Type I error also means that other treatment options are rejected in favor of the intervention.

In contrast, a Type II error means failing to reject a null hypothesis. It may only result in missed opportunities to innovate, but these can also have important practical consequences.

Example: Consequences of a Type II error
If a Type II error is made, the drug intervention is considered ineffective when it can actually improve symptoms of the disease. This means that a medication with important clinical significance doesn't reach a large number of patients who could tangibly benefit from it.

Frequently asked questions about Type I and II errors

How do you reduce the risk of making a Type I error?
The risk of making a Type I error is the significance level (or alpha) that you choose.
That's a value you set at the beginning of your study, against which you assess the statistical probability of obtaining your results (the p value). The significance level is usually set at 0.05, or 5%, meaning your results have at most a 5% chance of occurring if the null hypothesis is actually true. To reduce the Type I error probability, you can set a lower significance level.

What is statistical significance?
Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test. Significance is usually denoted by a p value, or probability value. Statistical significance is arbitrary: it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that the data would occur less than 5% of the time under the null hypothesis. When the p value falls below the chosen alpha value, we say the result of the test is statistically significant.

What is statistical power?
In statistics, power refers to the likelihood of a hypothesis test detecting a true effect if there is one. A statistically powerful test is less likely to produce a false negative (a Type II error). If you don't ensure enough power in your study, you may not be able to detect a statistically significant result even when it has practical significance, and your study might not have the ability to answer your research question.
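Ensuring enough power is usually done at the planning stage by choosing the sample size. As a rough illustration (a sketch using the standard normal-approximation formula n = 2·((z_(α/2) + z_power) / d)², with an assumed effect size, not a figure from the article):

```python
# Hypothetical planning sketch: per-group sample size for a target power.
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def required_n(alpha, target_power, effect):
    """Per-group n for a two-sided two-sample test (normal approximation)."""
    z_a = nd.inv_cdf(1 - alpha / 2)     # critical value for alpha
    z_b = nd.inv_cdf(target_power)      # quantile for the desired power
    return math.ceil(2 * ((z_a + z_b) / effect) ** 2)

n = required_n(alpha=0.05, target_power=0.80, effect=0.5)
print(f"Per-group n for 80% power at d = 0.5: {n}")
```

Demanding more power (say 90% instead of 80%) or chasing a smaller effect drives the required sample size up quickly, which is why power analysis belongs in the study design phase rather than after the data are in.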