Understanding Standard Deviation vs Standard Error: What’s the Difference?

08 Jun 2025 - tsp
Last update 09 Jun 2025
Reading time 2 mins

If you’ve ever worked with data, you’ve probably come across two terms that sound similar but mean different things: standard deviation $\sigma$ and standard error SE. Both involve measuring variability, but they serve very different purposes. A few days ago, I noticed that not everyone keeps this distinction in mind, so I wanted to write a very short summary. Let’s break down both concepts with an eye on practical relevance.

What Is Standard Deviation?

Standard deviation ($\sigma$) tells you how spread out the values in your dataset are.

The formula for the standard deviation (for a sample) is:

\[ \sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2} \]

This means we calculate the mean of the squared differences between each data point $x_i$ and the sample mean $\bar{x}$, and then take the square root. Conceptually, it gives us a typical distance (strictly speaking, the root-mean-square distance) of the data points from the mean, expressed in the same units as the original data.

You may have learned a version of this formula in school where the denominator is just $n$ instead of $n-1$. That version is used when describing the spread of an entire population, assuming all data is known. But when working with samples (which is almost always the case), using $n-1$ applies Bessel’s correction, which corrects the bias in estimating the population variance from sample data.
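To see the difference between the two denominators in practice, here is a minimal Python sketch (the sample values are made up for illustration; NumPy’s `ddof` parameter switches between the two formulas):

```python
import numpy as np

heights = np.array([168.0, 172.5, 165.0, 180.2, 170.1])  # made-up sample data

# Population formula (denominator n) vs. sample formula (denominator n-1,
# i.e. with Bessel's correction). NumPy's ddof parameter selects between them.
sigma_population = np.std(heights, ddof=0)
sigma_sample = np.std(heights, ddof=1)

print(f"sigma with denominator n:   {sigma_population:.3f}")
print(f"sigma with denominator n-1: {sigma_sample:.3f}")
```

For small samples the two values differ noticeably; as $n$ grows, the difference becomes negligible.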

Imagine you measure the heights of 100 people. If most people are close to the average (say, 170 cm), your standard deviation will be small. If heights vary widely—some are much taller or shorter—your $\sigma$ will be larger.

Example: In a sample of 100 people, the average height is 170 cm. If the standard deviation is 10 cm, about 68% of the people are between 160 cm and 180 cm (assuming heights are approximately normally distributed).

It’s a direct reflection of how much individual data points differ from the mean.
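As a quick sanity check of the 68% figure, one can simulate normally distributed heights, for example with NumPy (the parameters are illustrative assumptions, not real measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated heights: normal distribution with mean 170 cm, sigma 10 cm.
heights = rng.normal(loc=170.0, scale=10.0, size=100_000)

mean = heights.mean()
sigma = heights.std(ddof=1)

# Fraction of values within one standard deviation of the mean.
within_one_sigma = np.mean(np.abs(heights - mean) <= sigma)
print(f"fraction within mean +/- 1 sigma: {within_one_sigma:.3f}")  # ~0.683
```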

What Is Standard Error?

Standard error (SE), on the other hand, tells you how accurate your estimate of the mean is. Specifically, it shows how much your sample mean would vary if you repeated your experiment many times with new samples.

It’s calculated by:

\[ SE = \frac{\sigma}{\sqrt{n}} \]

Where $n$ is the number of observations.

So the larger your sample, the smaller your standard error - and the more precise your estimate of the population mean.

Example: Same height dataset: $\sigma = 10$ cm, sample size $n = 100$. The standard error is $SE = \frac{10}{\sqrt{100}} = 1$ cm, so you can be more confident that the true average height is close to your sample’s 170 cm.
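The claim that the sample mean scatters by roughly $\sigma / \sqrt{n}$ can also be checked empirically by drawing many samples and looking at the spread of their means. A small sketch (again with made-up, normally distributed data):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true, n, repetitions = 10.0, 100, 10_000

# Draw many independent samples of size n and record each sample mean.
sample_means = rng.normal(170.0, sigma_true, size=(repetitions, n)).mean(axis=1)

print(f"spread of the sample means: {sample_means.std(ddof=1):.3f}")  # empirical
print(f"sigma / sqrt(n):            {sigma_true / np.sqrt(n):.3f}")   # predicted SE = 1.0
```

The empirical spread of the sample means should come out very close to the predicted $SE = 1$ cm.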

Key Difference / Summary

| Concept | Standard Deviation ($\sigma$) | Standard Error (SE) |
|---|---|---|
| What it measures | Variability of the data | Precision of the mean estimate |
| Affected by $n$? | ❌ No | ✅ Yes – larger $n$ reduces SE |
| Use it for… | Describing how spread out your data is | Estimating how accurate your mean is |

Think of standard deviation $\sigma$ as telling you about the spread of your data, and standard error SE as telling you how sure you can be about your average. Mixing the two up can lead to misunderstandings.

This article is tagged: Statistics, Basics, Tutorial, Math, Measurements

