Detecting frequency, delay and phase relation as well as multiple targets using correlation functions

29 Dec 2022 - tsp
Last update 29 Dec 2022
Reading time 15 mins

First off, this is again a small summary written by me that describes my personal physicist's view on this topic. It is not a rigorous mathematical treatment and contains no formal definitions of these topics. If you want to read up on those, please refer to your favorite statistics or signal processing textbook.

What are correlation functions?

So first let’s summarize what a correlation function is. Basically it’s a function that tells one about the spatial or temporal statistical correlation between two functions. That means it tells one how similar two sample data sets or measured sequences are - also with respect to a spatial or temporal shift. This is used in many different fields of engineering and science, from data mining and economics over statistical mechanics up to (quantum) physics. Correlation functions are a really useful and simple tool - and there are techniques based on the fast Fourier transform that allow pretty efficient calculation in realtime or on highly parallel hardware such as FPGAs (which is rather interesting for realtime object tracking).

Convolution

The correlation functions used in signal processing are usually based on the convolution. The convolution is defined as

[ \begin{aligned} f(t) \times g(t) &:= \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau \\ &= \int_{-\infty}^{\infty} f(t - \tau) g(\tau) d\tau \end{aligned} ]

The two definitions are equivalent due to the commutativity of the convolution. Usually functions or data series are not supplied on the range from $-\infty$ to $\infty$, so one simply truncates the integral to the range $[0;t]$. But what does that really mean? This becomes clear when one looks at the Fourier transformed functions $F(\omega)$ and $G(\omega)$:

[ \begin{aligned} F(\omega) &= \mathfrak{F}(f(t)) = \int_{-\infty}^{\infty} f(t) e^{-i \omega t} dt \\ G(\omega) &= \mathfrak{F}(g(t)) = \int_{-\infty}^{\infty} g(t) e^{-i \omega t} dt \end{aligned} ]

The inverse transformation is:

[ \begin{aligned} f(t) &= \mathfrak{F}^{-1}(F(\omega)) = \int_{-\infty}^{\infty} F(\omega) e^{i \omega t} d\omega \\ g(t) &= \mathfrak{F}^{-1}(G(\omega)) = \int_{-\infty}^{\infty} G(\omega) e^{i \omega t} d\omega \end{aligned} ]

Inserting this into the definition of the convolution:

[ \begin{aligned} f \times g &= \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau \\ &= \int_{-\infty}^{\infty} f(\tau) \left( \int_{-\infty}^{\infty} G(\omega) e^{i \omega (t-\tau)} d\omega \right) d\tau \end{aligned} ]

Then one can simply change the order of integration:

[ \begin{aligned} f \times g &= \int_{-\infty}^{\infty} f(\tau) \left( \int_{-\infty}^{\infty} G(\omega) e^{i \omega (t-\tau)} d\omega \right) d\tau \\ &= \int_{-\infty}^{\infty} G(\omega) \underbrace{\left( \int_{-\infty}^{\infty} f(\tau) e^{-i \omega \tau} d\tau \right)}_{F(\omega)} e^{i \omega t} d\omega \\ &= \int_{-\infty}^{\infty} F(\omega) G(\omega) e^{i \omega t} d\omega \\ &= \mathfrak{F}^{-1} ( F(\omega) G(\omega) ) \end{aligned} ]

As one can see the convolution in the time domain is simply a multiplication in the frequency (Fourier transformed) domain. When one now recalls that sines form an orthogonal basis (which is also the reason Fourier transforms work) one can see that this works somewhat like the inner product of two discrete vectors - it forms a projection of one signal onto the other one in its signal basis. As with vectors in $\mathbb{R}^n$, when two signals are orthogonal to each other the result will be zero. When they are identical it will reach its maximum possible value (the product of both amplitudes). This is what later on allows the correlation to describe the similarity between two signals. This does not work in all cases for the convolution due to the different direction the operation works along the time axis.
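To make this inner product picture concrete, here is a tiny numerical sketch in Python/NumPy (sampling rate and frequencies are made-up values): two sines of different frequency are nearly orthogonal, identical sines give the maximum overlap.

```python
import numpy as np

# Sketch of the inner product picture: two sines of different frequency are
# (nearly) orthogonal, identical sines give the maximum possible overlap.
# Sampling rate (1 kHz) and frequencies (10 Hz, 13 Hz) are made-up values.
t = np.arange(0, 1.0, 1e-3)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 13 * t)

print(np.dot(a, a))  # ~ 500 (= N/2): identical signals, maximum overlap
print(np.dot(a, b))  # ~ 0: orthogonal signals
```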

When one now looks back at the definition of the convolution one sees there is a free parameter $t$:

[ f(t) \times g(t) := \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau \\ ]

As one can see this describes a shift of one of the functions along the time axis. That way it tells one how similar those functions are for a given shift in time - which allows one to extract the time delay at which the functions are maximally similar or maximally different.

The second important thing to take away from this journey through Fourier transforms is that the convolution can be computed as the inverse Fourier transform of the product of the two Fourier transformed functions.
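As a quick sanity check of this relation, the following Python/NumPy sketch (using arbitrary random test sequences) compares a directly computed circular convolution against the inverse FFT of the product of the FFTs:

```python
import numpy as np

# Numerical check of the convolution theorem: the circular convolution of two
# sequences equals the inverse FFT of the product of their FFTs.
# The test sequences are arbitrary random data.
rng = np.random.default_rng(0)
f = rng.normal(size=256)
g = rng.normal(size=256)

# Convolution computed in the frequency domain
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# Direct circular convolution for comparison: (f*g)[n] = sum_m f[m] g[(n-m) mod N]
N = len(f)
conv_direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])

print(np.allclose(conv_fft, conv_direct))  # True
```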

Correlation

So what is the correlation then? Basically the difference in the definition is a reversal of the time axis of one of the functions. As one can see in the convolution definition one walks into the future in one function and into the past in the other one. The correlation function reverses that direction and additionally complex conjugates one of the functions (this corresponds to the sign change in the exponent of the Fourier transform caused by the time reversal):

[ (f \times g)(t) = \int_{-\infty}^{\infty} \bar{f}(\tau) g(t + \tau) d\tau ]
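In discrete form this relation can be checked numerically: correlating with $f$ is the same as convolving with the time reversed, complex conjugated $f$. A minimal NumPy sketch with made-up test sequences:

```python
import numpy as np

# Quick numerical check: correlation is convolution with the time reversed,
# complex conjugated first function. Test sequences are arbitrary examples.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

corr = np.correlate(g, f, mode='full')
conv = np.convolve(g, np.conj(f)[::-1], mode='full')

print(np.allclose(corr, conv))  # True
```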

Implementing correlation and convolution for discrete signals

One can directly implement correlation and convolution using summations and shifts for small amounts of data:

[ f(t) \times g(t) = \sum_{n = -\infty}^{\infty} f_n g_{t - n} \\ (f \times g)(t) = \sum_{n = -\infty}^{\infty} \bar{f}_n g_{n+t} ]

For real, symmetric functions correlation and convolution are the same - they differ for asymmetric functions though. Typical applications are filter response calculations (convolution) or similarity measures, delay measurements, phase measurements, detection of patterns, etc. (correlation).
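A direct $O(N M)$ implementation of the discrete correlation sum above might look like the following sketch (Python/NumPy; the function name and the zero padded edge handling are illustrative choices):

```python
import numpy as np

def xcorr_direct(f, g):
    """Discrete cross-correlation by direct summation - an O(N*M) sketch.

    Implements sum_n conj(f_n) g_{n+k}; samples outside the measured range
    are treated as zero (i.e. the signals are implicitly zero padded).
    Returns the lag axis and the correlation values.
    """
    f = np.asarray(f, dtype=complex)
    g = np.asarray(g, dtype=complex)
    lags = np.arange(-(len(f) - 1), len(g))
    out = np.zeros(len(lags), dtype=complex)
    for i, k in enumerate(lags):
        for n in range(len(f)):
            if 0 <= n + k < len(g):
                out[i] += np.conj(f[n]) * g[n + k]
    return lags, out
```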

A problem arises for larger array sizes - the complexity basically grows as $O(N^2)$. On the other hand the complexity of a fast Fourier transform is $O(N \log N)$ due to the construction of the butterfly, and the complexity of a simple multiplication in frequency space is $O(N)$. Thus performing a forward transformation of both signals into frequency space $f(t) \to F(\omega) = \mathfrak{F}(f(t))$, complex conjugating one of the functions (or performing a time reversal in the time domain before the transformation), multiplying in frequency space and then performing an inverse transformation back into the time domain is often way more efficient.

[ (f \times g)(t) = \mathfrak{F}^{-1} \left( \overline{\mathfrak{F}(f(t))} \; \mathfrak{F}(g(t)) \right) ]
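A sketch of this FFT based correlation in Python/NumPy could look as follows (zero padding both signals so the circular correlation does not wrap around; names and conventions are illustrative):

```python
import numpy as np

def xcorr_fft(f, g):
    """Cross-correlation via the FFT: conj(FFT(f)) * FFT(g), inverse transformed.

    Both signals are zero padded to length N+M-1 so the circular correlation
    does not wrap around. Returns the lag of each output sample and the values.
    """
    n = len(f) + len(g) - 1
    F = np.fft.fft(f, n)
    G = np.fft.fft(g, n)
    cc = np.fft.ifft(np.conj(F) * G)
    # Output index k corresponds to lag k for k < len(g) and to lag k - n otherwise
    lags = np.concatenate((np.arange(0, len(g)), np.arange(-(len(f) - 1), 0)))
    return lags, cc
```

Up to floating point error this returns the same values as the direct summation sketched above, at $O(N \log N)$ cost.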

The last question that arises when working with discrete data is how to handle edges - one cannot sum from $-\infty$ to $\infty$. There are different approaches:

To suppress boundary effects one can either truncate the functions, mirror or periodically extend them, or apply so-called window functions. Often one uses Gaussian windows to exploit local correlations, sine or cosine windows (which are simple like the rectangular window), or the rectangular window itself (which leads to aliasing effects). Depending on the signal processing application, windows can also suppress side lobes, prevent aliasing effects or provide filtering. There is a nearly uncountable number of different window functions for different applications.
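As a small illustration, applying one of NumPy's built-in windows before a transform looks like this (the sampling rate and test signal are made-up values):

```python
import numpy as np

# Sketch: multiplying a signal with a window function before transforming it
# to suppress edge effects. np.hanning, np.hamming, np.blackman, ... are
# NumPy's built-in window functions.
t = np.arange(0, 1.0, 1e-3)
signal = np.sin(2 * np.pi * 10 * t)

window = np.hanning(len(signal))
spectrum = np.fft.rfft(signal * window)

print(np.argmax(np.abs(spectrum)))  # ~ 10, i.e. 10 Hz at 1 Hz bin spacing
```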

Using correlation functions

Frequency measurement of a periodic function

Let’s take a look at a really simple example of how one can use correlation functions - determining the frequency of a periodic function. One might measure a signal using an analog to digital converter and ask oneself what the frequency of the measured function is, assuming one can cleanly filter out this function. One might get the idea of measuring the zero crossings at multiple points or locating maxima by looking for points that are larger than their local neighbors. Both of those methods are prone to noise and would require massive statistics. On the other hand performing a correlation is simple and uses the information from all gathered points at the same time. Correlating a function with itself is called the autocorrelation.

Assuming one has a $10 Hz$ signal:

[ f(t) = \sin(\omega t) \\ \omega = 2 \pi f = 2 \pi 10 \\ \to f(t) = \sin(20 \pi t) ]

One can simply calculate the autocorrelation function:

[ (f \times f)(t) = \int_{-\infty}^{\infty} \bar{f}(\tau) f(t + \tau) d\tau ]

As one can see in the following plot the correlation function has periodic peaks. Towards positive and negative shifts its amplitude declines linearly. This happens in this case since the plot has been generated without special handling of the boundaries, simply padding the functions with zeros - this way the overlap between the two shifted copies gets smaller and smaller:

Simple signal and its autocorrelation

When one looks at the time difference between two consecutive maxima or minima one always gets $0.1s$ (as one would expect for a frequency of $10 Hz$) up to some rounding error.
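The whole procedure can be sketched in a few lines of Python/NumPy (the sampling rate, duration and the crude local-maximum search are assumptions made for this illustration):

```python
import numpy as np

# Sketch: estimate the frequency of a sampled 10 Hz tone from the spacing of
# the autocorrelation maxima. Sampling rate, duration and the very crude
# local-maximum search are made-up choices for illustration.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t)

# Autocorrelation with zero padded edges; keep only non-negative lags where
# the overlap between the shifted copies is still large
ac = np.correlate(sig, sig, mode='full')[sig.size - 1:][: sig.size // 2]

# Locate local maxima and average the spacing between consecutive ones
peaks = np.where((ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:]))[0] + 1
period = np.mean(np.diff(peaks)) / fs
print("estimated frequency:", 1.0 / period, "Hz")  # ~ 10 Hz
```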

How can one solve the problem of the declining amplitude? In case the signal is periodic and not random - as in this case - one could extend one of the functions into the negative and positive region by simply repeating the function (in case it’s not periodic one would have to mirror it, but that would lead to amplitude changes due to the phase jumps at the mirror points):

Simple repeated periodic function

The effect of the phase change in case of mirrored functions can be seen in the following graph:

Mirrored edge case handling

The phase jumps lead to errors in time delay and thus frequency measurements:

Errors in delay calculation due to phase jumps

In this case those errors of course average out exactly due to the perfectly simulated signal and the absence of noise. As one can see, in case one expects noise on the signal the handling of edge cases will be pretty important - as will be the periodicity of the signal to some extent - especially in case one also needs the amplitude information.

Determining phase between two shifted periodic functions

The above property can also be used to determine the phase between two signals. This is especially simple in case the frequency of the two signals is known. One just has to calculate the correlation, check the distance between at least two neighboring maxima or minima (which yields the period) and read off the offset of the extrema with respect to zero shift.

Let’s take the two shifted functions

[ \begin{aligned} f_1(t) &= \sin(\omega t) \\ \omega &= 2 \pi f = 2 \pi 10 \\ \to f_1(t) &= \sin(20 \pi t) \\ f_2(t) &= \sin(2 \pi f t + \phi) \\ &= \sin(2 \pi f(t + \frac{\phi}{2 \pi f})) \\ &= \sin(\omega (t + \underbrace{\frac{\phi}{\omega}}_{\delta \tau})) \end{aligned} ]

The phase shift $\phi = \frac{\pi}{3}$ is given in radians in this case and is related to the time delay $\delta \tau = \frac{\phi}{\omega}$, thus one can recover the phase as $\phi = \delta \tau \cdot \omega$. The time delay on the other hand can be read directly from the correlation function - but in this case it’s not the time difference between two different minima or maxima but the location of the first extremum with respect to the reference zero. All other maxima are then spaced by a time that corresponds to the period of the function (i.e. $0.1$ seconds for $10 Hz$).

Full correlation of two phase shifted signals

This already looks promising - but again the effect of the edges is significant. This time let’s take a different approach to counter those effects: just take half of the samples of the second signal and correlate only over the fully overlapping region:

Valid correlation of truncated shifted signals

The time difference can then be extracted from the maxima or minima found. The first extremum in the graph above lies at $\delta t \approx 0.017 s$. The spacing between two consecutive maxima or minima is $\delta t_{extrema} \approx 0.1 s$. As before the $0.1 s$ allow one to determine a frequency of

[ \begin{aligned} \delta t_{extrema} &\approx 0.1s \\ \frac{1}{\delta t_{extrema}} &= 10 Hz \\ \to f &= 10 Hz \end{aligned} ]

The first extremum is a minimum (thus we have a negative phase shift) that can be found at $\delta t \approx 0.0167s$. Now inserting into the above formula:

[ \begin{aligned} \delta t &= \frac{\phi}{\omega} \\ 0.0167s &= \frac{\phi}{2 \pi 10} \\ \to \phi &\approx 0.334 \pi \end{aligned} ]

This corresponds - up to a rounding / discretization error - to the phase shift during the simulation.
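The complete phase recovery can be sketched as follows (Python/NumPy; note that with NumPy's correlation convention the first maximum lies at $\delta\tau = \phi / \omega$, and the sampling rate and duration are made-up values):

```python
import numpy as np

# Sketch: recover the phase between two 10 Hz tones from the lag of the first
# cross-correlation maximum. Sampling rate and duration are made-up values.
fs, f0, phi = 1000.0, 10.0, np.pi / 3
t = np.arange(0, 1.0, 1.0 / fs)
s1 = np.sin(2 * np.pi * f0 * t)
s2 = np.sin(2 * np.pi * f0 * t + phi)

# Use only half of the second signal and correlate over the fully overlapping
# region ('valid') to avoid the edge effects discussed above
half = s2[: s2.size // 2]
cc = np.correlate(s1, half, mode='valid')

delta_tau = (np.argmax(cc) / fs) % (1.0 / f0)   # wrap the lag into one period
print("recovered phase:", 2 * np.pi * f0 * delta_tau, "rad")  # ~ pi/3
```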

Determining time delay of reflected signals

So as one can see it’s pretty simple to determine the time delay between two periodic signals using correlation functions. This works as long as the time delay introduces a phase shift of less than $2 \pi$. One can achieve this of course by simply using longer wavelengths, by modulating a longer wavelength structure onto a periodic carrier or - and that’s what’s practically done for noise radar - by using a totally random (or pseudo random) sequence to modulate the signal. Depending on the application one might also use a long pseudo random code generated from a polynomial as carrier or modulation, which is for example done for GPS satellites. Practically, noise radar is of special interest when one does not want to be discovered by traditional detection technologies (or wants to decouple Doppler and range detection), but it requires details like matched receivers and pulse compression - as for chirp radar - for a practical realization.

The idea is simple - transmit a random signal and correlate the received signal with the transmitted one.

To simulate this some white noise has been generated. First let’s take a look at the signal and its autocorrelation to get an idea why this might work pretty well:

Random noise signal and autocorrelation

As one can see there is one clearly detectable maximum at zero time shift and a huge suppression of the correlation for all other time delays. This is caused by the fact that the random transmitted noise function does not have any periodicity.

To demonstrate how this can be used to detect a reflecting target, the transmitted noise signal has been delayed - simulating the reflection - and correlated with the original signal:

Correlation function of a reflected random noise signal

As one can see there is exactly one significant peak in the plot, at 0.5 seconds latency - which corresponds to the time delay of the reflected signal. This means the signal traveled 0.25 seconds towards the target and 0.25 seconds back.

One can do the same thing with multiple reflections, which is what’s usually done for radar. This works since two time shifted white noise signals are orthogonal to each other in the sense that $\int_{-\infty}^{\infty} w_1(t) w_2(t + \delta t) dt \approx 0$ in a statistical sense.
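A minimal simulation of this multi-target case might look like the following sketch (Python/NumPy; the sampling rate, delays, echo amplitudes and noise level are all made-up values):

```python
import numpy as np

# Sketch: detecting the delays of two "echoes" of a transmitted noise burst by
# cross-correlating the received with the transmitted signal.
rng = np.random.default_rng(42)
fs = 1000.0
tx = rng.normal(size=2000)                   # transmitted white noise burst (2 s)

rx = np.zeros(4000)                          # received signal (4 s window)
rx[500:2500] += 0.8 * tx                     # first echo, 0.5 s delay
rx[1200:3200] += 0.3 * tx                    # second, weaker echo, 1.2 s delay
rx += 0.1 * rng.normal(size=rx.size)         # receiver noise

cc = np.correlate(rx, tx, mode='valid')      # lag k: overlap of tx with rx[k:]
delays = np.sort(np.argsort(cc)[-2:]) / fs   # the two strongest correlation peaks
print("detected delays:", delays, "s")       # ~ [0.5, 1.2]
```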

Of course to realize a real radar system and extract radar signatures one needs to do many other things, like applying a matched filter that accounts for many different Doppler shifts in parallel. For example one can discriminate a helicopter from a plane by the additional Doppler shifted components from the forward and backward moving rotor blades on top of the main velocity dependent Doppler shift, and one can build signatures for a huge variety of flying objects to identify them rather accurately just by their radar reflection. This is of course way out of scope for a basic look at correlation functions.

This article is tagged: Physics, Basics, How stuff works, Math, Statistics, Data Mining, Measurement

