PID control loop in a nutshell
22 Feb 2022 - tsp
Last update 22 Feb 2022
10 mins
The following article provides just a short overview of what a PID loop is
and how to implement one on discrete devices with manual tuning.
What is a PID control loop?
A proportional-integral-derivative (PID) loop is a control loop mechanism that
consists of three parallel components - namely the following parts:
- A proportional (P) term
- An integral (I) term
- A differential (D) term
Note that the control loop might also contain a process, a measurement and an optional
inverse process model. The basic idea is that one supplies a target value $\vec{x}(t)$
that a system should ideally evolve into or stay near. This might be a
simple setting like a temperature or angular position or a more complex state
vector like drone or vehicle velocity, a magnetic field, etc.

The value $\vec{e}(t)$ denotes the error, i.e. the deviation between the requested
target value $\vec{x}(t)$ and the current measured value - or the state estimated
from the measurement ($\vec{w}(t)$). All components of a PID loop directly depend on this error:
- The proportional part is - as the name implies - proportional to the error
value: $K_p * \vec{e}(t)$
- The differential component $K_d * \partial_t \vec{e}(t)$ depends on the rate
of change of the error signal.
- The integral component $K_i * \int \vec{e}(\tau) d\tau$ depends on the sum of
previous error values - either over the entire lifetime or, better, over a given time
window.
The sum of these components is then passed to the system, which is measured
again at the next time step. The measurement $\vec{q}(t)$ is optionally passed
through an inverse system model and yields the estimated state $\vec{w}(t)$.
As one can see this can be described by the following set of equations:
[
e(t) = x(t) - w(t)
]
[
y(t) = K_p * e(t) + K_d * \frac{\partial e(t)}{\partial t} + K_i * \int_{t-\delta t_I}^{t} e(\tau) d\tau
]
In discretized form the differential becomes a simple difference (and optionally a
division - but one can absorb that into the factor $K_d$) and the integral becomes a simple
sum (again, the multiplication with the time step of a proper Riemann sum can be absorbed
into $K_i$):
[
y(t_i) = K_p * e(t_i) + K_d * \frac{e(t_i) - e(t_i - \delta t_D)}{\delta t_D} + K_i * \sum_{\tau = t_i - \delta t_I}^{t_i} e(\tau)
]
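A minimal sketch of a single discrete update step might look like the following. Note that this sketch keeps a running sum over the whole error history instead of the sliding window $\delta t_I$, and the names pidStep, prevError and errorSum are just made up for illustration; the ring buffer variant actually used for the plots follows in the implementation section below:

double pidStep(
    double target,      /* set value x(t_i)                       */
    double measured,    /* process value w(t_i)                   */
    double *prevError,  /* e(t_i - delta t_D) from the last call  */
    double *errorSum,   /* running sum approximating the integral */
    double Kp, double Ki, double Kd,
    double dt           /* time step delta t_D                    */
) {
    double e = target - measured;          /* e(t_i) = x(t_i) - w(t_i) */
    double dTerm = (e - *prevError) / dt;  /* discrete differential    */
    *errorSum += e * dt;                   /* discrete integral        */
    *prevError = e;
    return Kp * e + Kd * dTerm + Ki * (*errorSum);
}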
When one wants to use a PID regulator for temperature control one can assume that
the power delivered into the heating elements is usually linearly dependent on the
PWM duty cycle and thus the output of the regulator is proportional to the missing
power that should be pushed into the heated volume - since the temperature in turn
depends mostly linearly on the electrical power. Of course the system also
encounters losses (conduction, radiation, etc.), but these can to some extent be
ignored and compensated by the integral term.
So what do the terms really do?
- $y(t)$ is the output of the regulator - in case of temperature regulation this
is the duty cycle and thus should be clamped to the valid range. This
is done by simply limiting the value and of course also by tuning the loop
coefficients (a minimal clamp sketch follows after this list). This quantity is usually called the control value.
- $e(t)$ is the deviation between the target value / set value $x(t)$ and the
current measured value $w(t)$ (process value).
- $\frac{\partial e(t)}{\partial t}$ is the derivative of the error; it's usually evaluated in
discrete fashion and approximated as $\frac{e(t) - e(t - \delta t_D)}{\delta t_D}$
- $\int_{t - \delta t_I}^{t} e(\tau) d \tau$ is the sum of all previous error
deviations inside a sliding time window of length $\delta t_I$
- $\delta t_{D}$ is the time window used for discrete differentiation. This is
the basic time step size.
- $\delta t_{I}$ is the integration time window in multiples of $\delta t_{D}$.
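As mentioned for the control value $y(t)$ above, the regulator output usually has to be limited to the valid actuator range. A minimal clamp for the 0 to 100 percent duty cycle example might look like the following (the function name is just made up for illustration):

static double clampDutyCycle(double y) {
    if(y < 0.0)   { return 0.0;   }
    if(y > 100.0) { return 100.0; }
    return y;
}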
The whole control loop is characterized by the three coefficients. Here I've shown
them as independent factors, but sometimes they're specified in a dependent way
(with $K_p$ acting as a common factor in front of all three terms, see the form
written out after this list) so one can really call $K_p$ the gain.
- $K_p$ is the proportional factor. It drives the system towards the target
with more force the farther it's away from the target value. Sometimes called gain.
- $K_d$ is the differential factor. If you try to simply build a proportional regulator
you will see that it starts to oscillate - first it overshoots, then it undershoots
again. The differential term is proportional to the change of the error deviation
and thus dampens oscillations and slows down the whole process. It usually
pushes away from the target direction to dampen the proportional part. Sometimes
it's called preact.
- $K_i$ is the integral factor. This term sums up systematic errors such
as constant average losses or offsets between the measurement system and
reality, and thereby compensates for deviations that the proportional term
alone cannot remove. Limiting the integration
window $\delta t_I$ is required to be able to react to dynamic changes. Sometimes
called reset.
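For reference, the dependent (standard) form mentioned above can be written as follows - here $T_i$ and $T_d$ are the integral and derivative time constants, so the independent coefficients follow as $K_i = K_p / T_i$ and $K_d = K_p T_d$:
[
y(t) = K_p \left( e(t) + \frac{1}{T_i} \int_{t - \delta t_I}^{t} e(\tau) d\tau + T_d \frac{\partial e(t)}{\partial t} \right)
]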
Tuning the PID parameters $K_p$, $K_d$ and $K_i$ is of course crucial and
the hardest task. Note that there is no single correct way to determine
these gains; there is a variety of procedures. This is because the goals of
being responsive and of minimizing overshoot contradict each other.
The simplest and most basic tuning procedure is the following:
- Take note whether your system changes fast (motor speed, attack angle of a
plane, etc.) or slowly (temperature, steering a large ship (you'd use something
different than a PID regulator, wouldn't you?), etc.). Fast systems usually require
a small gain $K_p$ and a high reset $K_i$, slow systems the other way round: a high gain $K_p$
and a low reset $K_i$.
- First set $K_i = 0$ and $K_d = 0$, i.e. build a simple proportional regulator
first.
- Increase $K_p$ by doubling until the system starts to oscillate. Then halve $K_p$.
- Start with a small integral term $K_i$. Double it until the system oscillates, then halve it again.
Up until now this is a process often seen in industry - one then often just stops
with a PI regulator. In case one cannot tolerate overshoot one might also
tune the differential part:
- Apply the damping term $K_d$ to suppress any remaining oscillations and
further reduce overshoot.
Note that you cannot use this approach for systems on which you cannot experiment - and
in case you have multiple processes depending on each other you really have to do
an eigenmode analysis to prevent the loops from fighting each other (this requires at least
some kind of working process model and is usually done using numerical analysis).
A simulated example (temperature controller)
Why use such a complicated system? In the following simulations I've simulated
a simple 100 W heater in a small volume, a control signal from 0 to 100 percent
duty cycle and included some loss (2 percent of the energy contained in
the system is always lost due to bad thermal isolation). I've also superimposed
some external random fluctuations that will come in handy later on when looking
at diverging scenarios (if they're going to happen). The start temperature is
0 Celsius, the target temperature 300 Celsius. I've modeled the slow increase
in temperature simply by placing the sensor on one end of the volume and the
heater on the other - which is basically a time delay.
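The exact simulation code is not reproduced here; the following is only a rough sketch of the kind of toy model described above (100 W heater, proportional loss, random noise, sensor delay). All names and constants such as HEAT_CAPACITY and DELAY_STEPS are assumptions for illustration:

#include <stdlib.h>

#define HEATER_POWER_W 100.0 /* maximum heater power                               */
#define HEAT_CAPACITY  50.0  /* heat capacity of the volume in J/K (assumed)       */
#define LOSS_FRACTION  0.02  /* 2 percent of the stored energy lost per step       */
#define DELAY_STEPS    16    /* sensor lag: heater and sensor on opposite ends     */

static double temperatureHistory[DELAY_STEPS];

double simulateStep(double *temperature, double dutyCyclePercent, double dt) {
    unsigned long int i;

    /* Energy pushed into the volume during this time step */
    double energyIn = HEATER_POWER_W * (dutyCyclePercent / 100.0) * dt;
    /* Simple proportional loss - not physically correct, see the text below */
    double energyLoss = LOSS_FRACTION * (*temperature) * HEAT_CAPACITY;
    /* Small superimposed random fluctuation */
    double noise = (((double)rand() / (double)RAND_MAX) - 0.5) * 0.5;

    *temperature = *temperature + (energyIn - energyLoss) / HEAT_CAPACITY + noise;

    /* The sensor sees the temperature with a delay of DELAY_STEPS steps */
    double delayed = temperatureHistory[0];
    for(i = 0; i < DELAY_STEPS - 1; i=i+1) {
        temperatureHistory[i] = temperatureHistory[i + 1];
    }
    temperatureHistory[DELAY_STEPS - 1] = *temperature;
    return delayed;
}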
So, first using a way too large proportional term for the given time delay and
no integral or differential term, one nicely sees the oscillations:

Reducing the gain by a huge factor, one already sees some damping. Why is
that, when I previously said damping is what the differential term does? Since I've
modeled the energy loss as proportional to the temperature (not physically
correct), the simulated system itself provides a differential damping term.

Further reducing the gain now shows massive overshoot, massive undershoot and
then fluctuations dominated by the simulated noise:

Reducing the gain too far no longer delivers enough power to reach the target temperature:

Thus I choose a gain that overshoots the target and then start with the integral
part. Choosing a way too large integral term also leads to massive overshoot
and oscillations:

Reducing the gain of the integral term now already allows one to find configurations
that converge - the oscillations are still damped by the differential behaviour of the
model and not by the regulator, though:

Playing around with the integral and proportional terms already allows one to find
a sufficiently nice PI regulator for the given problem:

In case the system itself does not contain any terms that dampen oscillations
it might be a good idea to introduce a damping term oneself - this can also be
used to reduce the overshoot by rate-limiting.
Implementation
The implementation of a PID controller is basically pretty simple. Each loop
iteration requires a measurement value, a target value, the resulting error
value and a finite history of error values for the integral term, in addition
to the three constants. A simple and naive implementation might look like the
following (this is the code that has been used to generate the plots above).
The integral part is realized using a ring buffer. It might be interesting to not
initialize the ring buffer with the constant 0 but with the first error
value, so the integral term doesn't produce garbage during the setup phase.
#include <stddef.h>

#define PID_INTEGRAL_SIZE 32

struct pidChannel {
    double dTarget;                             /* set value x(t)                        */
    double dMeasuredValue;                      /* process value w(t)                    */
    double dErrorHistory[PID_INTEGRAL_SIZE];    /* ring buffer holding the error history */
    double Kp;                                  /* proportional coefficient              */
    double Ki;                                  /* integral coefficient                  */
    double Kd;                                  /* differential coefficient              */
    unsigned long int dwErrorHistoryRB_Current; /* current write position in the buffer  */
    double tDiff;                               /* time step between two updates         */
};

void pidInit(
    struct pidChannel* lpChannel,
    double Kp,
    double Ki,
    double Kd,
    double tDiff
) {
    unsigned long int i;

    if(lpChannel == NULL) { return; }

    lpChannel->dTarget = 0;
    lpChannel->dMeasuredValue = 0;

    /* The error history is zeroed here; as mentioned above one might
       initialize it with the first error value instead to avoid garbage
       from the integral term during the setup phase. */
    for(i = 0; i < PID_INTEGRAL_SIZE; i=i+1) {
        lpChannel->dErrorHistory[i] = 0;
    }

    lpChannel->Kp = Kp;
    lpChannel->Ki = Ki;
    lpChannel->Kd = Kd;
    lpChannel->tDiff = tDiff;
    lpChannel->dwErrorHistoryRB_Current = 0;
}

void pidSetMeasurement(
    struct pidChannel* lpChannel,
    double dNewMeasurement
) {
    lpChannel->dMeasuredValue = dNewMeasurement;
}

void pidSetTarget(
    struct pidChannel* lpChannel,
    double dTarget
) {
    lpChannel->dTarget = dTarget;
}

double pidUpdateOutput(struct pidChannel* lpChannel) {
    unsigned long int i;

    /* Current error e(t) = x(t) - w(t) */
    double e = lpChannel->dTarget - lpChannel->dMeasuredValue;

    /* The error of the previous update sits one slot behind the current
       write position; fetch it before overwriting anything. */
    double eLast = lpChannel->dErrorHistory[(lpChannel->dwErrorHistoryRB_Current + PID_INTEGRAL_SIZE - 1) % PID_INTEGRAL_SIZE];

    /* Store the current error and advance the ring buffer write position */
    lpChannel->dErrorHistory[lpChannel->dwErrorHistoryRB_Current] = e;
    lpChannel->dwErrorHistoryRB_Current = (lpChannel->dwErrorHistoryRB_Current + 1) % PID_INTEGRAL_SIZE;

    /* Proportional, differential and integral parts */
    double yProp = lpChannel->Kp * e;
    double yDiff = lpChannel->Kd * (e - eLast) / lpChannel->tDiff;
    double yInt = 0;
    for(i = 0; i < PID_INTEGRAL_SIZE; i=i+1) {
        yInt = yInt + lpChannel->dErrorHistory[i];
    }
    yInt = yInt * lpChannel->tDiff * lpChannel->Ki;

    return yProp + yDiff + yInt;
}
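A short usage sketch of the routines above might look like the following. Note that readTemperatureSensor(), setPwmDutyCycle() and sleepSeconds() are hypothetical hardware and platform routines, and the gains are arbitrary placeholders that still have to be tuned as described above:

/* Hypothetical hardware and platform routines, assumed for illustration */
extern double readTemperatureSensor(void);
extern void setPwmDutyCycle(double dutyCyclePercent);
extern void sleepSeconds(double seconds);

int main(void) {
    struct pidChannel channel;

    /* Placeholder gains (Kp, Ki, Kd) and a time step of 1 second */
    pidInit(&channel, 0.5, 0.01, 0.1, 1.0);
    pidSetTarget(&channel, 300.0);       /* target temperature in Celsius */

    for(;;) {
        pidSetMeasurement(&channel, readTemperatureSensor());

        double y = pidUpdateOutput(&channel);

        /* Clamp the control value to the valid duty cycle range */
        if(y < 0.0)   { y = 0.0;   }
        if(y > 100.0) { y = 100.0; }
        setPwmDutyCycle(y);

        sleepSeconds(1.0);               /* run once per time step tDiff */
    }

    return 0;
}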