Introduction to Hypothesis Testing

Last updated: March 2026 · Advanced

Hypothesis testing is how we use data to make decisions. Is a new drug more effective than the current treatment? Is a factory’s defect rate higher than the acceptable threshold? Is there a real difference between two teaching methods, or is the observed difference just due to random chance? These questions all follow the same logical framework — and that framework is hypothesis testing. In this lesson, you will learn the core concepts: null and alternative hypotheses, test statistics, p-values, significance levels, and the types of errors that can occur.

The Logic of Hypothesis Testing

The reasoning behind a hypothesis test follows a structured sequence:

  1. Start with a claim — assume some default position (the null hypothesis) is true
  2. Collect data — draw a random sample and compute a relevant statistic
  3. Ask a key question — if the null hypothesis were true, how surprising is the data we observed?
  4. Make a decision — if the data would be very surprising under the null hypothesis, reject it. If the data is consistent with the null, fail to reject it.

A helpful analogy is the legal standard of “innocent until proven guilty.” In a trial, the default assumption (null hypothesis) is that the defendant is innocent. The prosecution must present evidence strong enough to reject that assumption beyond a reasonable doubt. If the evidence is not strong enough, the defendant is found “not guilty” — which is not the same as saying “innocent.” Similarly, failing to reject the null hypothesis does not prove it is true; it only means the data did not provide strong enough evidence against it.

Null and Alternative Hypotheses

Every hypothesis test begins with two competing statements about the population:

  • H_0 (the null hypothesis): the default claim — typically “no effect,” “no difference,” or “the parameter equals a specific value”
  • H_a (the alternative hypothesis): what you suspect or want to demonstrate — typically “there is an effect,” “there is a difference,” or “the parameter differs from the null value”

The null hypothesis always contains an equals sign. The alternative hypothesis uses <, >, or ≠ to specify whether the test looks for a deviation in one particular direction or in either direction.

Examples of Hypothesis Pairs

Drug effectiveness:

  • H_0: p = 0.50 (the drug is no better than a placebo — 50% recovery rate)
  • H_a: p > 0.50 (the drug produces a higher recovery rate)

Manufacturing quality:

  • H_0: μ = 500 g (the machine fills packages to the target weight)
  • H_a: μ ≠ 500 g (the machine is off target — too high or too low)

Comparing two groups:

  • H_0: μ_1 − μ_2 = 0 (no difference between the two groups)
  • H_a: μ_1 − μ_2 ≠ 0 (the groups differ)

One-Sided vs Two-Sided Tests

  • A one-sided (one-tailed) test has an alternative hypothesis with a direction: H_a: p > 0.50 or H_a: μ < 500. Use this when you only care about a deviation in one direction.
  • A two-sided (two-tailed) test has an alternative hypothesis without a specific direction: H_a: p ≠ 0.50 or H_a: μ ≠ 500. Use this when a deviation in either direction would be important.

When in doubt, use a two-sided test — it is more conservative and does not assume the direction of the effect in advance.
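Since both kinds of alternatives recur throughout this lesson, it may help to see the tail-probability logic in code. The sketch below uses only the Python standard library; the function name `p_value` and the `alternative` labels are illustrative choices, not a standard API:

```python
from statistics import NormalDist

def p_value(z: float, alternative: str) -> float:
    """P-value for a z test statistic, assuming H0 gives Z a standard normal distribution."""
    nd = NormalDist()
    if alternative == "greater":      # one-sided, Ha: parameter > null value
        return 1 - nd.cdf(z)
    if alternative == "less":         # one-sided, Ha: parameter < null value
        return nd.cdf(z)
    # two-sided: double the tail area beyond |z|
    return 2 * (1 - nd.cdf(abs(z)))

print(round(p_value(2.4, "greater"), 4))    # 0.0082
print(round(p_value(2.4, "two-sided"), 4))  # 0.0164
```

Note that the two-sided p-value is exactly twice the one-sided value for the same z, which is why the two-sided test is the more conservative choice.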

Test Statistics

A test statistic is a single number that measures how far the sample result is from what the null hypothesis predicts. The general form is:

\text{test statistic} = \frac{\text{sample statistic} - \text{null value}}{\text{standard error}}

This is essentially a z-score or t-score: it tells you how many standard errors the observed result is from the hypothesized value. A test statistic of 0 means the data matches the null hypothesis perfectly. A large positive or negative test statistic means the data is far from what the null hypothesis would predict.

For a proportion: z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}

Note that in hypothesis testing for proportions, the standard error uses the null value p_0 (not the sample proportion p̂) because we are asking: “If p_0 were the true proportion, how unlikely is our sample?”

For a mean (when σ is unknown): t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}
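Both formulas translate directly into code. A minimal sketch, using only the standard library (the function names are just for illustration):

```python
import math

def z_stat_proportion(p_hat: float, p0: float, n: int) -> float:
    """z statistic for a one-sample proportion test; the SE uses the null value p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

def t_stat_mean(x_bar: float, mu0: float, s: float, n: int) -> float:
    """t statistic for a one-sample mean test when sigma is unknown."""
    se = s / math.sqrt(n)
    return (x_bar - mu0) / se

print(round(z_stat_proportion(0.62, 0.50, 100), 2))  # 2.4  (the coin example below)
print(round(t_stat_mean(34, 30, 10, 50), 2))         # 2.83 (the ER example below)
```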

P-Values — What They Mean

The p-value is the probability of observing a result as extreme as (or more extreme than) the sample data, assuming the null hypothesis is true. It answers the question: “If H_0 is true, how likely are we to see data this extreme just by random chance?”

  • A small p-value (e.g., 0.003) means the observed data would be very unlikely under H_0. This is strong evidence against the null hypothesis.
  • A large p-value (e.g., 0.42) means the observed data is consistent with H_0. There is no strong evidence against the null hypothesis.

What the P-Value is NOT

The p-value is one of the most misinterpreted concepts in statistics. Here are common errors:

  • The p-value is NOT the probability that H_0 is true. It is the probability of the data (or more extreme data) given that H_0 is true — a conditional probability, not a direct statement about H_0.
  • The p-value is NOT the probability that the result is due to chance. It is calculated under the assumption that chance alone is operating.
  • A large p-value does NOT prove H_0 is true. It only means the data is consistent with H_0 — the data might also be consistent with other hypotheses.

Example 1: Testing a Coin for Fairness

You flip a coin 100 times and get 62 heads. Is the coin fair?

Step 1: State the hypotheses.

H_0: p = 0.50 (the coin is fair)

H_a: p ≠ 0.50 (the coin is not fair — two-sided test)

Step 2: Calculate the test statistic. Under H_0, the standard error uses p_0 = 0.50:

SE = \sqrt{\frac{0.50 \times 0.50}{100}} = \sqrt{\frac{0.25}{100}} = \sqrt{0.0025} = 0.05

z = \frac{0.62 - 0.50}{0.05} = \frac{0.12}{0.05} = 2.4

Step 3: Find the p-value. Since this is a two-sided test:

\text{p-value} = 2 \times P(Z > 2.4) = 2 \times 0.0082 = 0.0164

Interpretation: If the coin were truly fair, there is only a 1.64% chance of getting a result as extreme as 62 heads out of 100 flips (that is, 62 or more heads, or 38 or fewer). This is fairly strong evidence against the coin being fair.
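The 0.0164 figure comes from the normal approximation. As a sanity check, the same two-sided tail probability can be estimated by direct simulation (standard library only); the simulated value tends to land near the exact binomial probability of roughly 0.02, a bit above the normal-approximation figure:

```python
import random

random.seed(0)

# Simulate many 100-flip experiments with a truly fair coin and count how often
# the outcome is at least as extreme as 62 heads (62+ heads, or 38 or fewer).
trials = 20_000
extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if heads >= 62 or heads <= 38:
        extreme += 1

print(extreme / trials)  # roughly 0.02
```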

Significance Level (α)

The significance level α is the threshold you set before collecting data that determines how small the p-value must be to reject H_0. It represents the maximum probability of making a Type I error (rejecting a true null hypothesis) that you are willing to tolerate.

Common significance levels:

  α      Meaning
  0.10   10% risk of false positive — used in exploratory studies
  0.05   5% risk — the most common standard in research
  0.01   1% risk — used when consequences of a false positive are severe

Decision rule:

  • If p-value ≤ α: reject H_0. The result is statistically significant.
  • If p-value > α: fail to reject H_0. The result is not statistically significant.

Example 1 continued: With α = 0.05, the p-value of 0.0164 is less than 0.05. We reject H_0 and conclude there is statistically significant evidence that the coin is not fair.

Note the careful language: we say “fail to reject H_0,” not “accept H_0.” Absence of evidence against the null is not evidence that the null is true.

Type I and Type II Errors

Because hypothesis tests are based on sample data, there is always a chance of making the wrong decision. There are exactly two types of errors:

  • Type I Error (False Positive): Rejecting H_0 when it is actually true. You conclude there is an effect when there really is not one. The probability of this error equals α.
  • Type II Error (False Negative): Failing to reject H_0 when it is actually false. You miss a real effect. The probability of this error is denoted β.

The following table summarizes all possible outcomes:

                        H_0 is True                              H_0 is False
  Reject H_0            Type I Error (probability = α)           Correct decision (probability = 1 − β)
  Fail to reject H_0    Correct decision (probability = 1 − α)   Type II Error (probability = β)

Real-world consequences:

  • Type I Error in medicine: Approving a drug that does not actually work. Patients receive an ineffective treatment, and resources are wasted.
  • Type II Error in medicine: Failing to approve a drug that does work. Patients miss out on an effective treatment.

There is a trade-off between the two errors: lowering α (making it harder to reject H_0) reduces the risk of Type I errors but increases the risk of Type II errors. The only way to reduce both simultaneously is to increase the sample size.
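This trade-off is easy to see by simulation. The sketch below (standard library only; all parameter values are illustrative) repeatedly runs a two-sided proportion test, first with H_0 true to estimate the Type I error rate, then with a real effect present to estimate the Type II error rate:

```python
import random
from statistics import NormalDist

random.seed(1)

def reject(n: int, p_true: float, p0: float, alpha: float) -> bool:
    """Simulate one study of n Bernoulli trials and run a two-sided z test of H0: p = p0."""
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    se = (p0 * (1 - p0) / n) ** 0.5
    z = (p_hat - p0) / se
    return 2 * (1 - NormalDist().cdf(abs(z))) <= alpha

def error_rates(n, p0, p_alt, alpha, trials=2000):
    # Type I rate: how often we reject when H0 is actually true
    type1 = sum(reject(n, p0, p0, alpha) for _ in range(trials)) / trials
    # Type II rate: how often we fail to reject when the true proportion is p_alt
    type2 = 1 - sum(reject(n, p_alt, p0, alpha) for _ in range(trials)) / trials
    return type1, type2

# Lowering alpha cuts Type I errors but inflates Type II errors:
for alpha in (0.10, 0.05, 0.01):
    t1, t2 = error_rates(n=100, p0=0.50, p_alt=0.60, alpha=alpha)
    print(alpha, round(t1, 3), round(t2, 3))
```

With these settings the estimated Type I rate tracks α while the estimated Type II rate climbs as α shrinks; only a larger n would push both down at once.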

Statistical Power

Power is the probability of correctly rejecting a false null hypothesis:

\text{Power} = 1 - \beta

A test with high power is good at detecting real effects. A test with low power may miss real effects, leading to inconclusive results.

Three main factors affect power:

  1. Sample size — larger samples provide more information and increase power. This is the factor researchers have the most control over.
  2. Effect size — a larger true difference from the null value is easier to detect. You cannot control this, but you can design studies around the smallest effect you consider practically meaningful.
  3. Significance level (α) — a larger α (say 0.10 instead of 0.05) makes it easier to reject H_0, increasing power. However, this also increases the Type I error rate.

A commonly cited target for power is 0.80 (80%), meaning the test has an 80% chance of detecting a real effect if one exists. Power analysis — calculating the sample size needed to achieve a desired power — is a critical step in study planning.
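Under the normal approximation, power for a one-sided proportion test can be computed directly. A sketch using only the standard library (the function name and scenario numbers are illustrative):

```python
from statistics import NormalDist

def power_one_sided(p0: float, p_alt: float, n: int, alpha: float) -> float:
    """Approximate power of the one-sided z test H0: p = p0 vs Ha: p > p0
    when the true proportion is p_alt (normal approximation throughout)."""
    nd = NormalDist()
    se0 = (p0 * (1 - p0) / n) ** 0.5           # SE under H0: sets the rejection cutoff
    se_alt = (p_alt * (1 - p_alt) / n) ** 0.5  # SE under the true proportion
    cutoff = p0 + nd.inv_cdf(1 - alpha) * se0  # smallest sample proportion that rejects H0
    return 1 - nd.cdf((cutoff - p_alt) / se_alt)

# Power grows with sample size for a fixed true effect (p0 = 0.50, true p = 0.55):
for n in (100, 400, 800):
    print(n, round(power_one_sided(0.50, 0.55, n, 0.05), 2))  # roughly 0.26, 0.64, 0.88
```

Running the loop in reverse (fixing a power target such as 0.80 and searching over n) is exactly what a power analysis does.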

Statistical Significance vs Practical Significance

A result can be statistically significant (small p-value) but practically meaningless. This happens when the sample size is very large: with enough data, even a tiny, unimportant difference can produce a p-value below 0.05.

For example, suppose a study of 100,000 patients finds that a new blood pressure medication lowers systolic pressure by 0.5 mmHg compared to the standard treatment, with a p-value of 0.001. The result is statistically significant — but a 0.5 mmHg reduction has virtually no clinical impact. No doctor would change their prescribing based on such a small difference.

Always consider the effect size alongside the p-value. Ask: “Is the observed difference large enough to matter in practice?” Report confidence intervals whenever possible — they convey both the direction and the magnitude of the effect, which a p-value alone cannot do.
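The effect of sample size on the p-value is easy to demonstrate. The numbers below are illustrative (a fixed 0.5 mmHg difference between two equal-sized groups, with an assumed standard deviation of 15 mmHg); the effect never changes, yet the p-value collapses as n grows:

```python
from statistics import NormalDist

diff, sd = 0.5, 15.0   # tiny, clinically trivial mean difference (illustrative numbers)
p_at = {}
for n in (1_000, 10_000, 100_000):
    se = sd * (2 / n) ** 0.5            # SE of a difference in means, equal group sizes n
    z = diff / se
    p_at[n] = 2 * (1 - NormalDist().cdf(z))
    print(n, round(p_at[n], 4))
```

At n = 1,000 per group the difference is nowhere near significant; at n = 100,000 it is overwhelmingly "significant" while remaining just as unimportant.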

The Steps of a Hypothesis Test

Every hypothesis test follows the same seven-step framework:

  1. State H_0 and H_a — define the null and alternative hypotheses in terms of a population parameter
  2. Choose the significance level α — typically 0.05 unless there is a reason to use a different threshold
  3. Check conditions — verify that the sample is random, observations are independent, and the appropriate distributional assumptions are met
  4. Calculate the test statistic — measure how far the sample result is from the null value, in standard error units
  5. Find the p-value — determine the probability of observing a result this extreme (or more extreme) if H_0 is true
  6. Make a decision — compare the p-value to α and reject or fail to reject H_0
  7. State the conclusion in context — translate the statistical decision into a plain-language statement about the original research question

Following this framework ensures that every test is conducted systematically and transparently.
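For a one-sample proportion test, the computational steps (4 through 6) fit in a few lines. This is a sketch with only the standard library; the function name, return shape, and the rule-of-thumb condition check are illustrative choices:

```python
from statistics import NormalDist

def proportion_test(successes: int, n: int, p0: float,
                    alternative: str = "two-sided", alpha: float = 0.05):
    """One-sample z test for a proportion. Steps 1-2 (hypotheses, alpha) are the
    caller's inputs; step 3 is a rough large-sample check; steps 4-6 happen here."""
    # Step 3: large-sample condition (rule of thumb: >= 10 expected successes and failures)
    assert n * p0 >= 10 and n * (1 - p0) >= 10, "large-sample condition not met"
    p_hat = successes / n
    se = (p0 * (1 - p0) / n) ** 0.5       # Step 4: SE uses the null value p0
    z = (p_hat - p0) / se
    nd = NormalDist()
    if alternative == "greater":          # Step 5: p-value for the chosen alternative
        p = 1 - nd.cdf(z)
    elif alternative == "less":
        p = nd.cdf(z)
    else:
        p = 2 * (1 - nd.cdf(abs(z)))
    return z, p, p <= alpha               # Step 6: decision

z, p, rejected = proportion_test(62, 100, 0.50)   # the coin example
print(round(z, 2), round(p, 4), rejected)         # 2.4 0.0164 True
```

Steps 1, 2, and 7 stay with the analyst: the code cannot choose the hypotheses or phrase the conclusion in context.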

Real-World Application: Nursing — Evaluating ER Wait Times

A hospital administration claims that the average emergency room wait time is 30 minutes. A nursing researcher suspects the actual wait time is longer. She collects data from a random sample of n = 50 patients and finds x̄ = 34 minutes and s = 10 minutes. Is there evidence that the average wait time exceeds 30 minutes? Use α = 0.05.

Step 1: State the hypotheses.

H_0: μ = 30 (wait time is 30 minutes as claimed)

H_a: μ > 30 (wait time exceeds 30 minutes — one-sided)

Step 2: Significance level: α = 0.05.

Step 3: Check conditions. Random sample ✓. Independence (50 patients is less than 10% of all ER patients) ✓. Sample size n = 50 ≥ 30, so the CLT applies ✓.

Step 4: Calculate the test statistic.

SE = \frac{s}{\sqrt{n}} = \frac{10}{\sqrt{50}} = \frac{10}{7.071} = 1.414

t = \frac{\bar{x} - \mu_0}{SE} = \frac{34 - 30}{1.414} = \frac{4}{1.414} = 2.83

Step 5: Find the p-value. With df = 49 and t = 2.83 (one-sided, upper tail):

\text{p-value} \approx 0.003

Step 6: Decision. Since 0.003 ≤ 0.05, we reject H_0.

Step 7: Conclusion in context. There is statistically significant evidence at the α = 0.05 level that the average ER wait time exceeds 30 minutes. The sample data suggests the true mean is around 34 minutes. The hospital should investigate causes of the longer wait times and consider operational changes.

Clinical note: This finding is also practically significant — a 4-minute increase over the claimed wait time represents a meaningful difference in patient experience and potential triage delays, especially during peak hours.
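The p-value in step 5 comes from a t table or statistical software. Python's standard library has no t distribution, but as a self-contained check the t density can be integrated numerically; this is a sketch (in practice you would reach for something like scipy.stats.t.sf):

```python
import math

def t_sf(t: float, df: int, steps: int = 20_000) -> float:
    """Upper-tail probability P(T > t) for Student's t with df degrees of freedom,
    by midpoint-rule integration of the density over [0, t]."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    def pdf(x: float) -> float:
        return coef * (1 + x * x / df) ** (-(df + 1) / 2)
    h = t / steps
    area = sum(pdf((i + 0.5) * h) for i in range(steps)) * h  # integral of pdf from 0 to t
    return 0.5 - area  # P(T > t) = P(T > 0) - P(0 < T <= t)

# ER wait-time example: one-sided upper tail, df = 49
t_obs = (34 - 30) / (10 / math.sqrt(50))
print(round(t_obs, 2), round(t_sf(t_obs, 49), 4))
```

The result agrees with the p ≈ 0.003 used in the example above.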

Practice Problems

Test your understanding with these problems; a worked solution follows each one.

Problem 1: A company claims that 90% of its orders ship on time. A consumer group surveys 200 recent orders and finds that 168 shipped on time. Is there evidence that the on-time rate is less than 90%? (α = 0.05)

H_0: p = 0.90, H_a: p < 0.90 (one-sided, lower tail)

\hat{p} = \frac{168}{200} = 0.84

SE = \sqrt{\frac{0.90 \times 0.10}{200}} = \sqrt{\frac{0.09}{200}} = \sqrt{0.00045} = 0.02121

z = \frac{0.84 - 0.90}{0.02121} = \frac{-0.06}{0.02121} = -2.83

\text{p-value} = P(Z < -2.83) = 0.0023

Since 0.0023 < 0.05, reject H_0.

Answer: There is statistically significant evidence that the on-time shipping rate is less than 90%. The sample estimate of 84% suggests a meaningful shortfall.

Problem 2: A coin is flipped 200 times and lands heads 108 times. Test whether the coin is fair at α = 0.05.

H_0: p = 0.50, H_a: p ≠ 0.50 (two-sided)

\hat{p} = \frac{108}{200} = 0.54

SE = \sqrt{\frac{0.50 \times 0.50}{200}} = \sqrt{0.00125} = 0.03536

z = \frac{0.54 - 0.50}{0.03536} = \frac{0.04}{0.03536} = 1.13

\text{p-value} = 2 \times P(Z > 1.13) = 2 \times 0.1292 = 0.2584

Since 0.2584 > 0.05, fail to reject H_0.

Answer: There is not sufficient evidence to conclude the coin is unfair. Getting 108 heads out of 200 flips is well within the range of normal variation for a fair coin.

Problem 3: Identify the type of error in each scenario: (a) A drug is approved based on trial data, but later turns out to be ineffective. (b) A useful teaching method is dismissed because a small study found no significant improvement.

(a) This is a Type I Error (false positive). The null hypothesis (“the drug has no effect”) was true, but it was rejected based on the sample data.

(b) This is a Type II Error (false negative). The null hypothesis (“the teaching method has no effect”) was false (the method does work), but the study failed to reject it — likely due to low power from the small sample size.

Problem 4: A nutritionist believes the average daily calorie intake of college students exceeds 2,000 calories. A sample of 35 students shows x̄ = 2,150 and s = 400. Test at α = 0.01.

H_0: μ = 2000, H_a: μ > 2000 (one-sided)

SE = \frac{400}{\sqrt{35}} = \frac{400}{5.916} = 67.61

t = \frac{2150 - 2000}{67.61} = \frac{150}{67.61} = 2.22

With df = 34, P(t > 2.22) ≈ 0.017.

Since 0.017 > 0.01, fail to reject H_0.

Answer: At the α = 0.01 level, there is not sufficient evidence to conclude that the mean daily calorie intake exceeds 2,000 calories. Note: at α = 0.05, this result would be significant — the choice of significance level matters.

Problem 5: A hospital’s historical infection rate is 5%. After implementing a new hygiene protocol, a sample of 500 patients shows 16 infections. Is there evidence the rate has decreased? (α = 0.05)

H_0: p = 0.05, H_a: p < 0.05 (one-sided, lower tail)

\hat{p} = \frac{16}{500} = 0.032

SE = \sqrt{\frac{0.05 \times 0.95}{500}} = \sqrt{\frac{0.0475}{500}} = \sqrt{0.000095} = 0.00975

z = \frac{0.032 - 0.05}{0.00975} = \frac{-0.018}{0.00975} = -1.85

\text{p-value} = P(Z < -1.85) = 0.0322

Since 0.0322 < 0.05, reject H_0.

Answer: There is statistically significant evidence that the infection rate has decreased below 5% after the new protocol. The sample rate of 3.2% represents a meaningful improvement, supporting continued use of the new hygiene procedures.
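The three proportion problems (1, 2, and 5) can all be re-checked with one small helper. Standard library only; small differences from the table-based values above are rounding, nothing more:

```python
from statistics import NormalDist

def z_and_p(successes: int, n: int, p0: float, alternative: str):
    """z statistic and p-value for a one-sample proportion z test."""
    se = (p0 * (1 - p0) / n) ** 0.5      # SE under H0
    z = (successes / n - p0) / se
    nd = NormalDist()
    if alternative == "less":
        p = nd.cdf(z)
    elif alternative == "greater":
        p = 1 - nd.cdf(z)
    else:
        p = 2 * (1 - nd.cdf(abs(z)))
    return round(z, 2), round(p, 4)

print(z_and_p(168, 200, 0.90, "less"))       # Problem 1
print(z_and_p(108, 200, 0.50, "two-sided"))  # Problem 2
print(z_and_p(16, 500, 0.05, "less"))        # Problem 5
```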

Key Takeaways

  • Hypothesis testing is a structured framework for using sample data to make decisions about population parameters
  • The null hypothesis (H_0) represents the status quo (no effect, no difference); the alternative hypothesis (H_a) represents what you want to demonstrate
  • The test statistic measures how far the sample result is from the null value in standard error units
  • The p-value is the probability of observing data as extreme as yours if H_0 were true — a small p-value provides evidence against H_0
  • Compare the p-value to your significance level α: if p-value ≤ α, reject H_0; otherwise, fail to reject
  • Type I Error (false positive, probability α) means rejecting a true H_0; Type II Error (false negative, probability β) means failing to reject a false H_0
  • Power (1 − β) is the ability to detect a real effect — it increases with larger sample sizes, larger effect sizes, and larger α
  • Statistical significance does not imply practical significance — always consider the effect size and real-world context alongside the p-value
  • In healthcare, hypothesis tests help evaluate new treatments, monitor quality metrics, and make evidence-based decisions that directly affect patient care

