Linear Regression
Linear regression finds the straight line that best fits a set of data points. While correlation tells you how strong the linear relationship is, regression gives you the actual equation of the line, which lets you describe the relationship mathematically and make predictions. If you know a student studied 4.5 hours, regression tells you the predicted test score. This is one of the most widely used tools in all of statistics, from predicting sales revenue to estimating patient recovery times.
The Least-Squares Regression Line
The least-squares regression line (also called the line of best fit) is the line that minimizes the sum of the squared vertical distances from each data point to the line. In other words, it is the line where the total squared prediction error is as small as possible.
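This definition can be checked numerically. The sketch below uses the study-hours data introduced in Example 1 on this page, together with the fitted coefficients derived there (53.67 and 5.29), and confirms that nearby competing lines all produce a larger total squared error:

```python
# The least-squares line minimizes the sum of squared vertical distances
# (residuals). Any other line produces a larger total squared error.
hours = [2, 3, 5, 4, 6, 1]          # x: study hours (Example 1 below)
scores = [65, 70, 80, 75, 85, 58]   # y: test scores

def sse(a, b):
    """Sum of squared errors for the candidate line y_hat = a + b*x."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(hours, scores))

best = sse(53.67, 5.29)             # the least-squares coefficients
for a, b in [(50, 5.29), (53.67, 5.0), (55, 5.5)]:
    assert sse(a, b) > best         # every competing line does worse
print(round(best, 2))               # → 1.91
```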
The equation of the regression line is:

$$\hat{y} = a + bx$$

Where:
- $\hat{y}$ ("y-hat") is the predicted value of $y$ for a given $x$
- $b$ is the slope, the predicted change in $y$ for each one-unit increase in $x$
- $a$ is the y-intercept, the predicted value of $y$ when $x = 0$
Formulas for Slope and Intercept
The slope is calculated from the data:

$$b = \frac{n\sum xy - \sum x \sum y}{n\sum x^2 - \left(\sum x\right)^2}$$

The intercept is calculated using the slope and the means of $x$ and $y$:

$$a = \bar{y} - b\bar{x}$$

Notice that the numerator of the slope formula is the same as the numerator of the correlation coefficient formula. The difference is in the denominator: the slope uses only the $x$-part of the denominator from the $r$ formula, $n\sum x^2 - (\sum x)^2$, without the square root.
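These formulas translate directly into a few lines of code. A minimal Python sketch, applied to the study-hours data from Example 1 below:

```python
# Compute slope b and intercept a from the five summary sums.
x = [2, 3, 5, 4, 6, 1]
y = [65, 70, 80, 75, 85, 58]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # slope
a = sum_y / n - b * (sum_x / n)                               # intercept: y-bar minus b times x-bar

print(round(b, 2), round(a, 2))  # → 5.29 53.67
```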
Calculating the Regression Line
Example 1: Study Hours vs Test Score
Using the same dataset from the correlation page β six students who reported their study hours and test scores:
| Hours ($x$) | Score ($y$) |
|---|---|
| 2 | 65 |
| 3 | 70 |
| 5 | 80 |
| 4 | 75 |
| 6 | 85 |
| 1 | 58 |
From the correlation calculation, we already know: $n = 6$, $\sum x = 21$, $\sum y = 433$, $\sum xy = 1608$, $\sum x^2 = 91$.

Step 1: Calculate the slope.

$$b = \frac{6(1608) - (21)(433)}{6(91) - (21)^2} = \frac{9648 - 9093}{546 - 441} = \frac{555}{105} \approx 5.29$$

Step 2: Calculate the means.

$$\bar{x} = \frac{21}{6} = 3.5 \qquad \bar{y} = \frac{433}{6} \approx 72.17$$

Step 3: Calculate the intercept.

$$a = \bar{y} - b\bar{x} = 72.17 - 5.2857(3.5) = 72.17 - 18.50 \approx 53.67$$

Step 4: Write the regression equation.

$$\hat{y} = 53.67 + 5.29x$$
This equation describes the best-fitting line through the six data points.
Interpreting Slope and Intercept
Always interpret the slope and intercept in context, using the actual variable names and units from the problem.
Slope Interpretation
The slope $b = 5.29$ means: for each additional hour of study, the predicted test score increases by 5.29 points.
Notice the careful wording: "predicted" score, not "actual" score. The regression line gives predictions; individual students may score higher or lower than the prediction.
Intercept Interpretation
The intercept $a = 53.67$ means: a student who studies 0 hours is predicted to score 53.67 on the test.
However, be cautious about interpreting the intercept if $x = 0$ falls outside the range of your data. In this dataset, the lowest study time is 1 hour. The intercept is a mathematical necessity for defining the line, but it may not represent a meaningful real-world scenario.
Making Predictions
Once you have the regression equation, you can predict $\hat{y}$ for any value of $x$ by substituting it into the equation.
Example 2: Predict the Score for 4.5 Hours of Study
$$\hat{y} = 53.67 + 5.29(4.5) = 53.67 + 23.81 \approx 77.5$$

A student who studies 4.5 hours is predicted to score approximately 77.5 on the test.
Interpolation vs Extrapolation
- Interpolation: Predicting within the range of the data (here, between $x = 1$ and $x = 6$ hours). These predictions are generally reliable.
- Extrapolation: Predicting outside the range of the data (e.g., predicting the score for $x = 15$ hours). Extrapolation is risky because the linear pattern may not continue beyond the observed data. A student studying 15 hours might experience diminishing returns from fatigue, and the linear model cannot capture that.

Rule of thumb: Only use the regression line for predictions within (or very close to) the range of $x$-values in your dataset.
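The rule of thumb can be built into a prediction routine. In this sketch, `predict` is an illustrative helper (not from the original lesson) that flags extrapolation using the observed range of this dataset:

```python
# Predict with the fitted line, flagging extrapolation beyond the data.
X_MIN, X_MAX = 1, 6   # observed range of study hours in the dataset

def predict(x):
    y_hat = 53.67 + 5.29 * x
    if not (X_MIN <= x <= X_MAX):
        print(f"warning: x={x} is outside [{X_MIN}, {X_MAX}] -- extrapolation")
    return y_hat

print(round(predict(4.5), 2))   # interpolation: generally reliable
predict(15)                     # prints the extrapolation warning
```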
Residuals
A residual is the difference between what actually happened and what the regression line predicted:

$$\text{residual} = y - \hat{y}$$
- A positive residual means the actual value was higher than predicted (the point lies above the line)
- A negative residual means the actual value was lower than predicted (the point lies below the line)
- A residual of zero means the prediction was perfect (the point lies exactly on the line)
Example 3: Residual Table
Using $\hat{y} = 53.67 + 5.29x$, compute the predicted score and residual for each student:

| $x$ | $y$ (observed) | $\hat{y}$ (predicted) | Residual ($y - \hat{y}$) |
|---|---|---|---|
| 2 | 65 | 64.25 | +0.75 |
| 3 | 70 | 69.54 | +0.46 |
| 5 | 80 | 80.12 | -0.12 |
| 4 | 75 | 74.83 | +0.17 |
| 6 | 85 | 85.41 | -0.41 |
| 1 | 58 | 58.96 | -0.96 |
Notice that the residuals are a mix of positive and negative values, and they are all quite small, which makes sense given the very strong correlation ($r = 0.998$) for this dataset.
Key property: The sum of all residuals for a least-squares regression line always equals zero (or very close to zero due to rounding). Here $\sum(y - \hat{y}) = -0.11 \approx 0$; the small deviation comes from rounding the slope and intercept.
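The zero-sum property is easy to verify numerically. This sketch uses the unrounded coefficients as exact fractions, so the residuals sum to zero up to floating-point error:

```python
# Residuals y - y_hat sum to ~0 for the least-squares line.
x = [2, 3, 5, 4, 6, 1]
y = [65, 70, 80, 75, 85, 58]
b = 555 / 105                   # unrounded slope
a = 433 / 6 - b * (21 / 6)      # unrounded intercept
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print(abs(sum(residuals)) < 1e-9)  # → True
```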
Visualizing the Regression Line
The scatter plot below shows the six data points with the least-squares regression line drawn through them. Each teal dot is an observed data point, and the blue line is $\hat{y} = 53.67 + 5.29x$.
Study Hours vs Test Score with Regression Line
The teal dots represent observed scores, and the blue line is the least-squares fit. Notice how closely the points hug the line, reflecting the very high correlation ($r = 0.998$).
R-Squared: How Good Is the Fit?
The coefficient of determination, $r^2$, tells you how well the regression line fits the data. It represents the proportion of the total variation in $y$ that is explained by the linear relationship with $x$.
For our study hours example:

$$r^2 = (0.998)^2 \approx 0.996$$

This means 99.6% of the variation in test scores is explained by the linear relationship with study hours. Only 0.4% of the variation remains unexplained.
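The 0.996 figure can be reproduced from the raw data by computing $r$ first and squaring it:

```python
import math

# r and r-squared from the six (hours, score) pairs.
x = [2, 3, 5, 4, 6, 1]
y = [65, 70, 80, 75, 85, 58]
n = len(x)
sxy = n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)
sxx = n * sum(a * a for a in x) - sum(x) ** 2
syy = n * sum(b * b for b in y) - sum(y) ** 2
r = sxy / math.sqrt(sxx * syy)
print(round(r, 3), round(r ** 2, 3))  # → 0.998 0.996
```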
Interpreting R-squared Values
| $r^2$ | Interpretation |
|---|---|
| 0.90 and above | Excellent fit: the line captures nearly all variation |
| 0.70 to 0.90 | Good fit: most variation is explained |
| 0.40 to 0.70 | Moderate fit: some variation explained |
| Below 0.40 | Poor fit: most variation is unexplained |
A low $r^2$ does not necessarily mean the model is useless; it means other variables (not included in the model) also affect $y$. A high $r^2$ does not prove causation; it only confirms that the linear equation describes the pattern well.
Residual Plots: Checking Model Assumptions
A residual plot graphs the residuals (vertical axis) against the $x$-values or predicted values (horizontal axis). It is the most important diagnostic tool for evaluating whether a linear model is appropriate.
What to look for:
- Good pattern (linear model is appropriate): The residuals scatter randomly above and below zero with no obvious pattern, and the spread stays roughly constant across all $x$-values.
- Bad pattern (curved trend): If the residuals show a U-shape or inverted-U shape, the true relationship is probably curved, not linear. A linear model is not appropriate.
- Bad pattern (fan shape): If the residuals spread out (or narrow) as $x$ increases, the equal-variance assumption is violated. This is called heteroscedasticity.

Key principle: Even if $r^2$ is high, always check the residual plot. A high $r^2$ does not guarantee that a linear model is the right choice: it is possible to get a deceptively high $r^2$ when the true relationship is curved.
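A numeric stand-in for a residual plot makes the curved-trend case concrete. This sketch fits a line to data that is truly quadratic ($y = x^2$, invented for illustration) and inspects the residual signs; positive at both ends and negative in the middle is exactly the U-shape described above:

```python
# Fit a line to quadratic data, then look at the residual sign pattern.
x = [1, 2, 3, 4, 5, 6]
y = [xi ** 2 for xi in x]       # truly curved relationship
n = len(x)
b = (n * sum(u * v for u, v in zip(x, y)) - sum(x) * sum(y)) / \
    (n * sum(u * u for u in x) - sum(x) ** 2)
a = sum(y) / n - b * sum(x) / n
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
signs = ['+' if e > 0 else '-' for e in residuals]
print(signs)  # → ['+', '-', '-', '-', '-', '+']  (a U-shaped pattern)
```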
Real-World Application: Nursing, Predicting Patient Recovery Time from Age
A hospital collects data on 8 patients who underwent the same knee replacement surgery, recording each patientβs age and recovery time (days until discharge):
| Age ($x$) | Recovery Days ($y$) |
|---|---|
| 30 | 3 |
| 38 | 4 |
| 42 | 4 |
| 45 | 5 |
| 52 | 6 |
| 55 | 7 |
| 60 | 8 |
| 65 | 9 |
A nurse researcher runs a regression and obtains $\hat{y} = -2.85 + 0.178x$ (with $r^2 \approx 0.97$).
Interpreting the slope: For each additional year of age, the predicted recovery time increases by 0.178 days. Equivalently, for every 5 to 6 additional years of age, recovery is predicted to increase by about 1 day.
Prediction: A 50-year-old patient would be predicted to need $\hat{y} = -2.85 + 0.178(50) \approx 6.05$ days, or roughly 6 days for recovery.
Clinical use: This regression helps nurses plan staffing and discharge logistics. However, individual patients vary based on fitness, complications, and other factors that the model does not capture (the 3% unexplained variation, plus unmeasured variables).
Caution about extrapolation: Predicting recovery time for a 20-year-old ($\hat{y} \approx 0.7$ days) or a 90-year-old ($\hat{y} \approx 13.2$ days) may be unreliable because those ages fall outside the observed data range of 30 to 65.
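The nurse researcher's coefficients can be reproduced directly from the table above:

```python
# Regression of recovery days on age for the 8 knee-replacement patients.
ages = [30, 38, 42, 45, 52, 55, 60, 65]
days = [3, 4, 4, 5, 6, 7, 8, 9]
n = len(ages)
b = (n * sum(a * d for a, d in zip(ages, days)) - sum(ages) * sum(days)) / \
    (n * sum(a * a for a in ages) - sum(ages) ** 2)
a0 = sum(days) / n - b * sum(ages) / n
print(round(b, 3), round(a0, 2))  # → 0.178 -2.85
print(round(a0 + b * 50, 1))      # predicted stay for a 50-year-old → 6.0
```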
Practice Problems
Test your understanding with these problems.
Problem 1: Given $n = 5$, $\sum x = 15$, $\sum y = 40$, $\sum xy = 140$, $\sum x^2 = 55$. Find the equation of the least-squares regression line.
Step 1: Calculate the slope.

$$b = \frac{5(140) - (15)(40)}{5(55) - (15)^2} = \frac{700 - 600}{275 - 225} = \frac{100}{50} = 2.0$$

Step 2: Calculate the means.

$$\bar{x} = \frac{15}{5} = 3 \qquad \bar{y} = \frac{40}{5} = 8$$

Step 3: Calculate the intercept.

$$a = \bar{y} - b\bar{x} = 8 - 2.0(3) = 2$$

Answer: $\hat{y} = 2 + 2.0x$
For each one-unit increase in $x$, $y$ is predicted to increase by 2.0 units.
Problem 2: A regression equation is $\hat{y} = 120 - 3.5x$, where $x$ is the number of absences and $y$ is the final exam score. (a) Interpret the slope. (b) Predict the score for a student with 8 absences. (c) Would you trust a prediction for 40 absences?
(a) For each additional absence, the predicted final exam score decreases by 3.5 points.
(b) $\hat{y} = 120 - 3.5(8) = 120 - 28 = 92$. A student with 8 absences is predicted to score 92.
(c) No. Predicting for 40 absences gives $\hat{y} = 120 - 3.5(40) = -20$, a negative score, which is impossible. This is extrapolation far beyond the likely range of the data, and the linear trend clearly breaks down. The model is only valid within the range of $x$-values in the original dataset.
Problem 3: A student scores 82 on an exam. The regression equation predicts $\hat{y} = 78$ for that student's study hours. Calculate and interpret the residual.

$$\text{residual} = y - \hat{y} = 82 - 78 = +4$$

Interpretation: The student scored 4 points higher than the regression model predicted. The residual is positive, meaning the actual score was above the regression line.
Problem 4: A regression model has $r^2 = 0.64$. What does this tell you? If the slope is positive, what is the correlation coefficient?
$r^2 = 0.64$ means that 64% of the variation in the response variable is explained by the linear relationship with the explanatory variable. The remaining 36% is due to other factors.
If the slope is positive:

$$r = +\sqrt{0.64} = +0.8$$

Answer: The correlation coefficient is $r = 0.8$, indicating a strong positive linear relationship.
Problem 5: A residual plot shows residuals that form a clear pattern: positive residuals on the left, negative in the middle, and positive on the right. What does this tell you about the linear model?
A U-shaped residual plot indicates that the linear model is not appropriate for this data. The true relationship between $x$ and $y$ is curved (likely quadratic or some other nonlinear form), not a straight line.
What to do: Consider fitting a nonlinear model (such as a quadratic regression), or transform one of the variables (such as taking the log or square root of $x$ or $y$) to straighten the relationship before fitting a line.
Even if the $r^2$ for the linear model appears decent, the systematic pattern in the residuals tells you the model is missing an important feature of the data.
Key Takeaways
- The least-squares regression line minimizes the sum of squared residuals and provides the best linear fit to the data
- The slope describes the predicted change in $y$ for each one-unit increase in $x$; always interpret it in context using the actual variable names
- The intercept is the predicted $y$ when $x = 0$; it may or may not have a meaningful real-world interpretation
- Residuals ($y - \hat{y}$) measure how far each observed value falls from the regression line; they always sum to approximately zero
- Interpolation (predicting within the data range) is generally reliable; extrapolation (predicting outside the data range) is risky and should be avoided
- $r^2$ tells you the proportion of variation in $y$ explained by the linear model, but a high $r^2$ does not prove causation or guarantee the model is appropriate
- Always examine a residual plot to check whether the linear model is a good fit β look for random scatter with constant spread
- In healthcare and other applied fields, regression helps make predictions, but individual outcomes vary based on factors not captured by the model
Last updated: March 29, 2026