Some Gaussian integrals in 1 dimension

11 Sep 2020 - tsp
Last update 15 Sep 2020
Reading time 15 mins

This short article provides a mini reference / summary of some commonly occurring Gaussian integrals. These are summarized here since my students sometimes ask how certain factors are calculated and how one performs such calculations for statistics at the high school level. The integrals are not really challenging to solve but are usually not shown in most school textbooks.

The basic integral

First let’s look at the simplest integral

[ \int x * e^{-x^2} dx ]

This is easily solvable if one looks at the derivative of the exponential part $e^{-x^2}$ alone, which is easily calculated via the chain rule:

[ \frac{\partial}{\partial x}e^{-x^2} = -2*x * e^{-x^2} \\ \to x * e^{-x^2} = -\frac{1}{2} \frac{\partial}{\partial x}e^{-x^2} ]

Inserting this expression into the integral

[ \int x * e^{-x^2} dx = -\frac{1}{2} \int \frac{\partial}{\partial x} e^{-x^2} dx ]

and using the definition of the integral as the inverse of differentiation one can easily calculate (up to a constant of integration):

[ -\frac{1}{2} \int \frac{\partial}{\partial x} e^{-x^2} dx = -\frac{1}{2} e^{-x^2} ]
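A quick numerical cross-check of this antiderivative (a minimal sketch, assuming numpy and scipy are available; the boundary $b = 1.5$ is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

# Compare int_0^b x*exp(-x^2) dx against the antiderivative -exp(-x^2)/2
b = 1.5  # arbitrary finite upper boundary
numeric, _ = quad(lambda x: x * np.exp(-x**2), 0.0, b)
analytic = -0.5 * (np.exp(-b**2) - 1.0)  # F(b) - F(0)
print(numeric, analytic)  # both approx 0.4473
```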

Integral from zero to infinity

[ \int_{0}^{\infty} e^{-u^2} du ]

This integral is trickier to solve. One can use an approach based on lower and upper bounds. First one defines a simple integral up to a finite boundary:

[ I_b = \int_{0}^{b} e^{-u^2} du ]

This allows the definition of the squared quantity $I_b^2$:

[ I_b^2 = \int_{0}^{b} e^{-x^2} dx * \int_{0}^{b} e^{-y^2} dy \\ I_b^2 = \int_0^b \int_0^b e^{-x^2-y^2} dx dy \\ I_b^2 = \int_0^b \int_0^b e^{-(x^2+y^2)} dx dy ]

This is an integral over a two dimensional square ranging from $0$ to $b$ on the first axis and from $0$ to $b$ on the second axis. Since the integrand is positive, one can clearly see that a quarter circle with a radius of $b$ (inscribed in the square) yields a smaller integral and a quarter circle with a radius of $\sqrt{2} * b$ (circumscribing the square) yields a larger one. Thus they form a lower and upper bound for the integral over the square and will be termed $A_1$ and $A_2$ from now on:

(Figure: integration strategy used for the upper and lower bound)

[ \int \int_{A_1} e^{-(x^2+y^2)} dx dy \leq I_b^2 \leq \int \int_{A_2} e^{-(x^2+y^2)} dx dy ]

Transforming from Cartesian to polar coordinates leads to

[ \int_{0}^{\frac{\pi}{2}} \int_0^b e^{-r^2} r dr d\theta \leq I_b^2 \leq \int_{0}^{\frac{\pi}{2}} \int_{0}^{\sqrt{2}b} e^{-r^2} r dr d\theta ]

The integral

[ \int_{0}^{\frac{\pi}{2}} \int_0^b e^{-r^2} r dr d\theta ]

is pretty easy to solve (see above):

[ \int_{0}^{\frac{\pi}{2}} \int_0^b e^{-r^2} r dr d\theta \\ = \int_{0}^{\frac{\pi}{2}} d\theta * \int_0^b e^{-r^2} r dr \\ = \underbrace{\theta \mid_{0}^{\frac{\pi}{2}}}_{\frac{\pi}{2}} * \left(-\frac{1}{2} e^{-r^2}\right) \mid_0^b \\ = - \frac{\pi}{2} * \frac{1}{2} * \left(e^{-b^2} - e^{0}\right) \\ = -\frac{\pi}{4} \left(e^{-b^2} - 1\right) \\ = \frac{\pi}{4} \left(1 - e^{-b^2}\right) ]

This leads to the easy expression

[ \frac{\pi}{4} \left(1 - e^{-b^2}\right) \leq I_b^2 \leq \frac{\pi}{4} \left(1 - e^{-2*b^2}\right) ]
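These bounds can be checked numerically for a finite $b$ (a sketch, again assuming scipy; dblquad integrates $I_b^2$ directly over the square):

```python
import numpy as np
from scipy.integrate import dblquad

b = 1.0  # arbitrary finite boundary
# I_b^2 as a double integral over the square [0,b] x [0,b]
Ib2, _ = dblquad(lambda y, x: np.exp(-(x**2 + y**2)), 0.0, b, 0.0, b)
lower = np.pi / 4 * (1 - np.exp(-b**2))
upper = np.pi / 4 * (1 - np.exp(-2 * b**2))
print(lower, Ib2, upper)  # lower <= Ib2 <= upper
```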

Now one can easily extend the integral to infinity by using $\lim_{b\to\infty}$:

[ \lim_{b\to\infty} \frac{\pi}{4} \left(1 - \underbrace{e^{-b^2}}_{\to 0}\right) \leq \lim_{b\to\infty} I_b^2 \leq \lim_{b\to\infty} \frac{\pi}{4} \left(1 - \underbrace{e^{-2*b^2}}_{\to 0}\right) \\ \to \frac{\pi}{4} \leq \lim_{b\to\infty} I_b^2 \leq \frac{\pi}{4} ]

As one can see, in the limit the lower and upper bound are equal, which allows one to identify:

[ \lim_{b\to\infty} I_b^2 = \frac{\pi}{4} \\ \to \lim_{b\to\infty} I_b = \frac{\sqrt{\pi}}{2} \\ \int_0^{\infty} e^{-t^2} dt = \frac{\sqrt{\pi}}{2} ]

One can ask why the positive sign has been chosen when taking the square root. Using the definition of the Lebesgue integral one can argue that the exponential is always positive, so the summation process can only lead to positive numbers. Note that this argument is not sufficient for a mathematician though.

The result will be used later on:

[ \int_0^{\infty} e^{-t^2} dt = \frac{\sqrt{\pi}}{2} ]

Of course this also solves the symmetric integral, since the integrand is even:

[ \int_{-\infty}^{\infty} e^{-t^2} dt = 2 * \int_{0}^{\infty} e^{-t^2} dt = \sqrt{\pi} ]
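Both identities are easy to verify numerically (a sketch assuming scipy; quad handles the infinite boundaries):

```python
import numpy as np
from scipy.integrate import quad

half, _ = quad(lambda t: np.exp(-t**2), 0.0, np.inf)
full, _ = quad(lambda t: np.exp(-t**2), -np.inf, np.inf)
print(half, np.sqrt(np.pi) / 2)  # approx 0.8862 each
print(full, np.sqrt(np.pi))      # approx 1.7725 each
```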

Normalization of the standardized Gaussian function

The standardized Gaussian function simply is

[ f_1(x) = e^{-\frac{1}{2} * x^2} ]

In statistics and many other applications a function is required to be normalized, i.e. one requires a factor that ensures that

[ \int_{-\infty}^{\infty} c * f_1(x) dx = 1 ]

For a probability distribution this condition encodes the requirement that one of the possible outcomes of an experiment will happen for sure, i.e. the total probability is one.

Using the identity from before

[ \int_0^{\infty} e^{-t^2} dt = \frac{\sqrt{\pi}}{2} ]

and first exploiting the symmetry, then introducing the coordinate change $u = \frac{1}{\sqrt{2}} x$ - which also transforms the differential as $dx \to \sqrt{2} du$ - one can calculate the normalization factor to be $\frac{1}{\sqrt{2\pi}}$:

[ \int_{-\infty}^{\infty} c * e^{-\frac{1}{2} x^2} dx = 1 \\ c * \int_{-\infty}^{\infty} e^{-\frac{1}{2} x^2} dx = 1 \\ c * 2 * \int_{0}^{\infty} e^{-\frac{1}{2} x^2} dx = 1 \\ c * 2 * \sqrt{2} * \int_{0}^{\infty} e^{-u^2} du = 1 \\ c * 2 * \sqrt{2} * \frac{\sqrt{\pi}}{2} = 1 \\ c * \sqrt{2\pi} = 1 \\ c = \frac{1}{\sqrt{2\pi}} ]

Using this one can define the normalized Gaussian function to be

[ No(z) = \frac{1}{\sqrt{2\pi}} * e^{-\frac{1}{2} z^2} ]

Note that this also yields the identity

[ \int_{-\infty}^{\infty} e^{-\frac{1}{2} z^2} dz = \sqrt{2\pi} ]
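This identity can be confirmed numerically as well (same assumptions as in the sketches above):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda z: np.exp(-0.5 * z**2), -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi))  # approx 2.5066 each
```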

Expectation value of the normalized standard distribution

One can also show quite simply that the expectation value of the normalized standardized Gaussian distribution is zero, as one would expect:

[ E(z) = \int_{-\infty}^{\infty} z * \frac{1}{\sqrt{2\pi}} * e^{-\frac{1}{2} z^2} dz \\ = \frac{1}{\sqrt{2\pi}} * \int_{-\infty}^{\infty} z * e^{-\frac{1}{2} z^2} dz \\ = - \frac{1}{\sqrt{2\pi}} * \int_{-\infty}^{\infty} \frac{\partial}{\partial z} e^{-\frac{1}{2} z^2} dz \\ = - \frac{1}{\sqrt{2\pi}} * \left( e^{-\frac{1}{2} * z^2} \right) \mid_{-\infty}^{\infty} \\ = -\frac{1}{\sqrt{2\pi}} * \left(\underbrace{\lim_{z\to\infty} e^{-\frac{1}{2} z^2}}_{0} - \underbrace{\lim_{z\to -\infty} e^{-\frac{1}{2} z^2}}_{0} \right) \\ = 0 ]
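A numerical spot check of this expectation value (a sketch assuming scipy; the result is zero up to integration tolerance):

```python
import numpy as np
from scipy.integrate import quad

c = 1.0 / np.sqrt(2 * np.pi)
Ez, _ = quad(lambda z: z * c * np.exp(-0.5 * z**2), -np.inf, np.inf)
print(Ez)  # approx 0 (odd integrand)
```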

Normalized Gaussian distribution

Normalization factor for the non-normalized Gaussian distribution

The normalization factor for the generic Gaussian distribution

[ e^{-\frac{1}{2} \left(\frac{x-\mu}{\sigma}\right)^2} ]

is calculated simply by substituting $z = \frac{x-\mu}{\sigma}$ (thus $dx = \sigma * dz$) and evaluating:

[ \int_{-\infty}^{\infty} c_3 * e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} dx = 1 \\ c_3 * \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} dx = 1 \\ c_3 * \sigma * \underbrace{\int_{-\infty}^{\infty} e^{-\frac{1}{2} z^2} dz}_{\sqrt{2\pi}} = 1 \\ c_3 * \sigma * \sqrt{2\pi} = 1 \\ \to c_3 = \frac{1}{\sigma * \sqrt{2\pi}} ]

This leads to the normalized Gaussian distribution function:

[ No(x;\mu,\sigma) = \frac{1}{\sigma * \sqrt{2\pi}} * e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} ]

Expectation value of the generic Gaussian distribution

[ E(x) = \int_{-\infty}^{\infty} x * \frac{1}{\sigma * \sqrt{2\pi}} e^{-\frac{1}{2} \left(\frac{x-\mu}{\sigma}\right)^2} dx ]

This expectation value can be calculated by directly evaluating the integrals after performing the usual standardization:

[ z = \frac{x-\mu}{\sigma} ]

Note that this of course also transforms the differential: $\frac{\partial z}{\partial x} = \frac{1}{\sigma}$ and thus $dx = \sigma * dz$:

[ E(x) = \int_{-\infty}^{\infty} (z\sigma + \mu) * \frac{1}{\sigma * \sqrt{2\pi}} e^{-\frac{1}{2} z^2} \sigma dz \\ E(x) = \frac{1}{\sigma * \sqrt{2\pi}} * \sigma^2 \int_{-\infty}^{\infty} z * e^{-\frac{1}{2} z^2} dz + \mu \sigma * \frac{1}{\sigma * \sqrt{2\pi}} * \underbrace{\int_{-\infty}^{\infty} e^{-\frac{1}{2} z^2} dz}_{\sqrt{2\pi}} \\ E(x) = \underbrace{- \frac{\sigma^2}{\sigma * \sqrt{2\pi}} * e^{-\frac{1}{2} z^2} \mid_{-\infty}^{\infty}}_{0} + \frac{\mu \sigma}{\sigma * \sqrt{2\pi}} * \sqrt{2\pi} \\ E(x) = \frac{\mu \sigma * \sqrt{2\pi}}{\sigma * \sqrt{2\pi}} = \mu \\ E(x) = \mu ]
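The same can be checked numerically for example parameters (a sketch assuming scipy; $\mu = 2$ and $\sigma = 0.5$ are arbitrary illustrative values):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 2.0, 0.5  # arbitrary example parameters

def integrand(x):
    # x times the normalized Gaussian density
    return x / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma)**2)

Ex, _ = quad(integrand, -np.inf, np.inf)
print(Ex, mu)  # approx 2.0 each
```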

Characteristic function of the standard Gaussian distribution ($\mu=0, \sigma=1$)

The characteristic function is calculated by performing a Fourier transform:

[ \phi(\omega) = \int_{-\infty}^{\infty} f(x) * e^{i \omega x} \text{d}x \\ \phi(\omega) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} x^2} * e^{i \omega x} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} x^2 + i \omega x} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} \left(x^2 - 2 * i \omega x\right) } \text{d}x ]

As one can now easily see, one can complete the square as usual:

[ \phi(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} \left(x^2 - 2 * i \omega x + i^2 \omega^2 - i^2 \omega^2 \right) } \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} (x - i \omega)^2} * e^{\frac{i^2 \omega^2}{2}} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} (x - i \omega)^2} * e^{-\frac{\omega^2}{2}} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} e^{-\frac{\omega^2}{2}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * (x - i \omega)^2} \text{d}x ]

Now one can simply substitute $z = x - i \omega$, which leads to

[ z^2 = (x - i \omega)^2 \\ z = x - i \omega \\ \frac{\partial z}{\partial x} = 1 \\ \to \partial x = \partial z ]

Using the substitution one can see an integral like the one calculated earlier. Note that strictly speaking this substitution shifts the integration path into the complex plane; since the integrand is an entire function that decays rapidly, Cauchy's integral theorem guarantees the value stays the same:

[ \phi(\omega) = \frac{1}{\sqrt{2\pi}} e^{-\frac{\omega^2}{2}} \underbrace{\int_{-\infty}^{\infty} e^{-\frac{1}{2} * z^2} \text{d}z}_{\sqrt{2 * \pi}} \\ \phi(\omega) = \frac{1}{\sqrt{2\pi}} * e^{-\frac{1}{2} \omega^2} * \sqrt{2*\pi} \\ \phi(\omega) = e^{-\frac{\omega^2}{2}} ]
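One can verify this result numerically by integrating real and imaginary parts separately (a sketch assuming scipy; $\omega = 0.7$ is an arbitrary test frequency):

```python
import numpy as np
from scipy.integrate import quad

omega = 0.7  # arbitrary test frequency
c = 1.0 / np.sqrt(2 * np.pi)
re, _ = quad(lambda x: c * np.exp(-0.5 * x**2) * np.cos(omega * x), -np.inf, np.inf)
im, _ = quad(lambda x: c * np.exp(-0.5 * x**2) * np.sin(omega * x), -np.inf, np.inf)
print(re + 1j * im, np.exp(-omega**2 / 2))  # imaginary part approx 0
```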

Characteristic function of the generic Gaussian distribution

The characteristic function is calculated analogously to the characteristic function of the standardized Gaussian distribution above:

[ \phi(\omega) = \int_{-\infty}^{\infty} f(x) * e^{i \omega x} \text{d}x \\ \phi(\omega) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi} * \sigma} * e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} * e^{i \omega x} \text{d}x \\ \phi(\omega) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi} * \sigma} * e^{-\frac{1}{2} * \left(\frac{x^2-2 x \mu + \mu^2}{\sigma^2}\right)} * e^{i \omega x} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2-2 x \mu + \mu^2}{\sigma^2}\right) + i \omega x} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2-2 x \mu + \mu^2}{\sigma^2} - 2 i \omega x\right)} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2-2 x \mu + \mu^2 - 2 i \omega x \sigma^2}{\sigma^2}\right)} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2 - 2x(\mu + i \omega \sigma^2) + \mu^2}{\sigma^2}\right)} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} e^{-\frac{1}{2} \frac{\mu^2}{\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2 - 2x(\mu + i \omega \sigma^2)}{\sigma^2}\right)} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} e^{-\frac{1}{2} \frac{\mu^2}{\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \left(\frac{x^2 - 2x(\mu + i \omega \sigma^2) + (\mu + i \omega \sigma^2)^2 - (\mu + i \omega \sigma^2)^2}{\sigma^2}\right)} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} e^{-\frac{1}{2} \frac{\mu^2}{\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \frac{(x - (\mu + i \omega \sigma^2))^2}{\sigma^2} + \frac{1}{2} \frac{(\mu + i \omega \sigma^2)^2}{\sigma^2}} \text{d}x \\ \phi(\omega) = \frac{1}{\sqrt{2\pi} * \sigma} e^{-\frac{1}{2} \frac{\mu^2}{\sigma^2}} e^{\frac{1}{2} \frac{(\mu + i \omega \sigma^2)^2}{\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} * \frac{(x - (\mu + i \omega \sigma^2))^2}{\sigma^2}} \text{d}x ]

Again one can perform a simple substitution

[ z^2 = \left(\frac{x - (\mu + i \omega \sigma^2)}{\sigma}\right)^2 \\ z = \frac{x - (\mu + i \omega \sigma^2)}{\sigma} \\ \frac{\partial z}{\partial x} = \frac{1}{\sigma} \to \sigma * \partial z = \partial x ]

Using this substitution one can recover a classical Gaussian integral:

[ \phi(\omega) = \frac{1}{\sqrt{2 \pi \sigma^2}} * e^{-\frac{1}{2} * \left(\frac{\mu^2}{\sigma^2} - \frac{(\mu + i \omega \sigma^2)^2}{\sigma^2} \right)} \underbrace{\int_{-\infty}^{\infty} e^{-\frac{1}{2} z^2} \text{d}z}_{\sqrt{2\pi}} \sigma \\ \phi(\omega) = \frac{1}{\sqrt{2\pi \sigma^2}} * \sqrt{2 \pi \sigma^2} * e^{-\frac{1}{2} * \left(\frac{\mu^2 - (\mu^2 + 2\mu i \omega \sigma^2 + i^2 \omega^2 \sigma^4)}{\sigma^2}\right)} \\ \phi(\omega) = e^{\frac{1}{2} \frac{2 \mu i \omega \sigma^2 - \omega^2 \sigma^4}{\sigma^2}} = e^{\frac{1}{2} (2 \mu i \omega - \omega^2 \sigma^2)} \\ \phi(\omega) = e^{i \mu \omega} * e^{-\frac{1}{2} \omega^2 \sigma^2} ]
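Again a numerical spot check (a sketch with arbitrary example values for $\mu$, $\sigma$ and $\omega$):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, omega = 1.0, 2.0, 0.3  # arbitrary example values

def pdf(x):
    # normalized Gaussian density
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

re, _ = quad(lambda x: pdf(x) * np.cos(omega * x), -np.inf, np.inf)
im, _ = quad(lambda x: pdf(x) * np.sin(omega * x), -np.inf, np.inf)
print(re + 1j * im)                                         # numerical phi(omega)
print(np.exp(1j * mu * omega - 0.5 * omega**2 * sigma**2))  # analytic result
```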

Not an integral: Most probable value (MPV) and inflection points of the Gaussian distribution

The most probable value of the Gaussian function is really easy to calculate (and was added to this blog post due to a request):

[ f(x) = \frac{1}{\sigma * \sqrt{2\pi}} * e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} \\ \to \frac{df(x)}{dx} = \frac{1}{\sigma * \sqrt{2\pi}} * e^{-\frac{1}{2} * \left(\frac{x-\mu}{\sigma}\right)^2} * (-\frac{x-\mu}{\sigma} * \frac{1}{\sigma}) \\ \to \frac{df(x)}{dx} = -f(x) * \frac{x-\mu}{\sigma^2} \\ \to \frac{d^2f(x)}{dx^2} = -(-f(x) * \frac{x-\mu}{\sigma^2}) * \frac{x-\mu}{\sigma^2} - f(x) * \frac{1}{\sigma^2} \\ \to \frac{d^2f(x)}{dx^2} = f(x) * \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} ]

Now one can simply do standard extrema search by setting the first derivative to zero:

[ \frac{df(x)}{dx} = 0 \\ \to -f(x) * \frac{x-\mu}{\sigma^2} = 0 \to \frac{x-\mu}{\sigma^2} = 0 \,\,\, (\text{since } f(x) > 0) \\ \to x - \mu = 0 \\ \to x = \mu ]

As expected the (potential) extremum is located at the expectation value. Note that this is the case for Gaussian distribution functions - but not for all distribution functions (asymmetric distributions such as the Landau distribution for example have an MPV that differs from the expectation value).

One can verify that the potential extremum really is an extremum - and indeed a maximum - by checking the curvature at this location:

[ \frac{d^2f(\mu)}{dx^2} = f(\mu) * \frac{(\mu - \mu)^2 - \sigma^2}{\sigma^4} = \underbrace{\underbrace{f(\mu)}_{>0} * \underbrace{-\frac{1}{\sigma^2}}_{< 0}}_{<0} ]

As one can see the curvature is negative, so one has really discovered a local maximum of the function - which is the most probable value (MPV).

The inflection points can be found - as usual - by looking at vanishing curvature, i.e. setting the second derivative to zero:

[ \frac{d^2f(x)}{dx^2} = f(x) * \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} \\ f(x) * \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} = 0 \\ \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} = 0 \\ (x-\mu)^2 - \sigma^2 = 0 \\ (x-\mu)^2 = \sigma^2 \\ x - \mu = \pm \sigma \\ x = \mu \pm \sigma ]

As expected the inflection points are located at $\mu - \sigma$ and $\mu + \sigma$.
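Both results can be confirmed on a dense grid (a sketch assuming numpy; the parameters are arbitrary and the second derivative is approximated by finite differences):

```python
import numpy as np

mu, sigma = 1.0, 0.8  # arbitrary example parameters
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 100001)
f = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

print(x[np.argmax(f)])  # approx mu -> the MPV

# Inflection points: sign changes of the numerical second derivative
d2 = np.gradient(np.gradient(f, x), x)
print(x[:-1][np.diff(np.sign(d2)) != 0])  # approx mu - sigma and mu + sigma
```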

Product between two 1D Gaussians

Now as one often has to do calculations with distributions one might take a look at the result of the product of two Gaussians. This is often required during sensor data fusion or when calculating with measured data:

[ g_A(x) = \frac{1}{\sqrt{2 \pi \sigma_A^2}} * e^{-\frac{(x-\mu_A)^2}{2\sigma_A^2}} \\ g_B(x) = \frac{1}{\sqrt{2 \pi \sigma_B^2}} * e^{-\frac{(x-\mu_B)^2}{2\sigma_B^2}} \\ g_A(x) * g_B(x) = \frac{1}{2 \pi \sigma_A * \sigma_B} * e^{-\frac{1}{2} * \left( \frac{(x-\mu_A)^2}{\sigma_A^2} + \frac{(x-\mu_B)^2}{\sigma_B^2} \right)} ]

As one can see the exponent can simply be expanded:

[ \frac{(x-\mu_A)^2}{\sigma_A^2} + \frac{(x-\mu_B)^2}{\sigma_B^2} \\ = \frac{x^2 - 2 * x * \mu_A + \mu_A^2}{\sigma_A^2} + \frac{x^2 - 2*x*\mu_B + \mu_B^2}{\sigma_B^2} \\ = \frac{(x^2 - 2 * x * \mu_A + \mu_A^2) * \sigma_B^2 + (x^2 - 2 * x * \mu_B + \mu_B^2) * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} \\ = x^2 * \frac{\sigma_B^2 + \sigma_A^2}{\sigma_A^2 * \sigma_B^2} - 2 * x * \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} + \frac{\mu_A^2 * \sigma_B^2 + \mu_B^2 * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} ]

Comparison with $\frac{(x - \mu_C)^2}{\sigma_C^2} = \frac{x^2}{\sigma_C^2} - 2 * x * \frac{\mu_C}{\sigma_C^2} + \frac{\mu_C^2}{\sigma_C^2}$ allows the determination of the coefficients $\sigma_C$ and $\mu_C$. First one can look at the first term $\frac{x^2}{\sigma_C^2}$ to derive an expression for the new deviation $\sigma_C$:

[ \frac{1}{\sigma_C^2} = \frac{\sigma_B^2 + \sigma_A^2}{\sigma_A^2 * \sigma_B^2} \\ \to \sigma_C^2 = \frac{\sigma_A^2 * \sigma_B^2}{\sigma_A^2 + \sigma_B^2} \\ \to \sigma_C = \frac{\sigma_A * \sigma_B}{\sqrt{\sigma_A^2 + \sigma_B^2}} ]

Using this knowledge one can look at the second term $- 2 * x * \frac{\mu_C}{\sigma_C^2}$:

[ \frac{\mu_C}{\sigma_C^2} = \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} \\ \to \mu_C = \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} * \sigma_C^2 \\ \mu_C = \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 * \sigma_B^2} * \underbrace{\frac{\sigma_A^2 * \sigma_B^2}{\sigma_A^2 + \sigma_B^2}}_{\sigma_C^2} \\ \to \mu_C = \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 + \sigma_B^2} ]

As one can see the new parameter $\mu_C$ - which is also the expectation value of the resulting distribution - is shifted in a weighted fashion towards the expectation value that had been determined with the better accuracy (i.e. the smaller deviation).

In short:

[ No(x;\mu_A,\sigma_A) * No(x;\mu_B,\sigma_B) \to No(x; \mu_C = \frac{\mu_A * \sigma_B^2 + \mu_B * \sigma_A^2}{\sigma_A^2 + \sigma_B^2}; \sigma_C = \frac{\sigma_A * \sigma_B}{\sqrt{\sigma_A^2 + \sigma_B^2}}) ]

Note that there is no equal sign present since the function has to be renormalized, which is usually required to maintain the properties of a distribution function. Note that there are indeed applications that require one to also use the scaling factor resulting from the multiplication. This can easily be seen when one looks at the scaling factor:

[ c_{mult} = \frac{1}{\sqrt{2\pi*\sigma_A^2}} * \frac{1}{\sqrt{2 \pi \sigma_B^2}} = \frac{1}{2 \pi \sigma_A \sigma_B} \\ c_{new} = \frac{1}{\sqrt{2 \pi \sigma_C^2}} \\ \sqrt{2 \pi} * \sigma_C \neq 2 \pi \sigma_A * \sigma_B ]
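A short numerical illustration of these formulas (a sketch assuming numpy; the parameters are arbitrary). The ratio between the product and $No(x;\mu_C,\sigma_C)$ is constant in $x$, which shows the proportionality and the non-trivial scaling factor at the same time:

```python
import numpy as np

mu_a, s_a = 0.0, 1.0  # arbitrary example parameters
mu_b, s_b = 2.0, 0.5

mu_c = (mu_a * s_b**2 + mu_b * s_a**2) / (s_a**2 + s_b**2)
s_c = s_a * s_b / np.sqrt(s_a**2 + s_b**2)

def g(x, m, s):
    # normalized 1D Gaussian density
    return np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-3, 5, 9)
ratio = g(x, mu_a, s_a) * g(x, mu_b, s_b) / g(x, mu_c, s_c)
print(mu_c, s_c)  # 1.6, approx 0.4472
print(ratio)      # constant over x: proportional, not equal
```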

Adding Gaussian distributed numbers

What happens when one adds two independent random numbers - for example from a measurement process - and wants to determine their distribution? It's important to note that the addition of two random variables is not the same as the addition of two distribution functions - the latter would yield a mixture of multiple Gaussians and not be Gaussian any more. The sum of two independent Gaussian random variables on the other hand is still Gaussian distributed. One can immediately see this by looking at the product of the characteristic functions of both distributions - for independent variables the sum corresponds to a multiplication in Fourier space and thus to a convolution in the space of values.

[ X \sim No(\mu_x, \sigma_x) \\ Y \sim No(\mu_y, \sigma_y) \\ \phi_{x+y}(t) = \phi_{x}(t) * \phi_{y}(t) = e^{i t \mu_x - \frac{\sigma_x^2 t^2}{2}} * e^{i t \mu_y - \frac{\sigma_y^2 t^2}{2}} \\ \phi_{x+y}(t) = e^{i t (\mu_x + \mu_y) - \frac{t^2}{2}(\sigma_x^2 + \sigma_y^2)} ]

As one can see the result is the characteristic function of a normal distribution with mean $\mu_x + \mu_y$ as well as a deviation of $\sqrt{\sigma_x^2 + \sigma_y^2}$.

[ X \sim No(\mu_x, \sigma_x) \\ Y \sim No(\mu_y, \sigma_y) \\ X, Y \text{ independent} \to X+Y \sim No(\mu_x + \mu_y, \sqrt{\sigma_x^2 + \sigma_y^2}) ]
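A Monte Carlo check of this statement (a sketch assuming numpy; sample sizes and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, s_x, mu_y, s_y = 1.0, 2.0, -0.5, 1.5  # arbitrary example parameters

x = rng.normal(mu_x, s_x, 1_000_000)
y = rng.normal(mu_y, s_y, 1_000_000)  # drawn independently of x
z = x + y

print(z.mean(), mu_x + mu_y)              # approx 0.5 each
print(z.std(), np.sqrt(s_x**2 + s_y**2))  # approx 2.5 each
```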

This article is tagged: Mathematics, Gaussian, Statistics

