"Negative feedback" may be considered as bad for personal self-esteem and social development, but it is the key to effective performance in our world of automated, precise control systems. The concept is simple: observe a critical parameter, assess the difference between the desired outcome and the value of the parameter, and adjust a correcting force's intensity and direction to reduce the difference. In other words: If the temperature of the pot is too high, turn the heat down or even off; if it is too low, turn the heat up.

The critical parameters in closed-loop control are the *setpoint*, which is the desired value of the process variable (temperature, pressure, volume, flow rate); the *observed or measured value* of the *process variable*; and the *control output*, such as an increase in the applied heat, **Figure 1**, via an actuator. Negative feedback error, derived from the relationship between the setpoint and observed value, is used to "close the loop" to manage the control output.

The simplest way to implement this feedback-based control is with an on/off strategy. In this approach, the valve or switch that controls the applied force (such as flow or heat) needs only on and off modes. It’s used in many applications, such as home ovens, refrigerators, and tank fluid-level control. When the measured variable exceeds an upper threshold, the heat is turned off; when it drops below a lower threshold, it is turned back on. In many designs, the thresholds are implemented as simple hysteresis around the setpoint. On/off control is sometimes called “bang-bang” control due to the noise a valve often makes as it slams open and shut.
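The on/off strategy can be sketched in a few lines of code. The setpoint, hysteresis band, and crude oven model below are illustrative assumptions, not values from any particular product:

```python
# On/off ("bang-bang") control with hysteresis around the setpoint.
# All numbers (setpoint, band, heating/cooling rates) are illustrative.

def bang_bang(temperature, heater_on, setpoint=350.0, hysteresis=10.0):
    """Return the heater state for the next control cycle."""
    if temperature >= setpoint + hysteresis:
        return False          # above the upper threshold: heat off
    if temperature <= setpoint - hysteresis:
        return True           # below the lower threshold: heat on
    return heater_on          # inside the band: keep the current state

# Crude oven model: heating adds 2 degrees/step, losses remove 1 degree/step.
temp, heater = 300.0, False
for _ in range(200):
    heater = bang_bang(temp, heater)
    temp += (2.0 if heater else 0.0) - 1.0
```

Note that the temperature never settles: it saws back and forth across the hysteresis band indefinitely.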

On/off control is low cost and adequate for some applications, but suffers from large swings and overshoots in the observed variable; a home oven can easily have an error band of ±10 or even ±20 °F around the desired setpoint, **Figure 2**. (The number depends on factors such as thermal mass, heat source energy, oven design, and air flow.)

To reduce the error, the upper and lower thresholds can be set closer to the desired setpoint (or the hysteresis reduced), with the result that the control output must be cycled more frequently. This improves performance somewhat but has practical downsides: it is often noisy, can lead to early failure of the control element, and oscillation remains a common problem. Furthermore, on/off control cannot smoothly reach and hold the setpoint when a normal "upset" event occurs, such as when more water is poured into a pot that must still be maintained at the setpoint, or when the user changes the setpoint.

**Go proportional to minimize errors**

An intuitive, more sophisticated, and potentially more precise approach is proportional control, where the control element is not simply on/off but can instead be set anywhere from 0% to 100% open or on. The control output is made proportional to the error between the desired setpoint and the observed variable: if the error is large, more corrective force is applied; if the error is small, less is applied. Basic proportional control is a major first step toward maintaining the observed variable close to the desired setpoint with minimum error, even when upsets occur.
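A minimal proportional-only controller can be sketched as follows; the gain, actuator limits, and toy thermal model are illustrative assumptions:

```python
# Proportional-only control: drive is proportional to the error,
# clamped to the actuator's 0%-100% range. The gain kp is illustrative.

def p_control(setpoint, measured, kp=0.5, out_min=0.0, out_max=1.0):
    """Return a drive level in [0, 1], proportional to the error."""
    error = setpoint - measured
    return max(out_min, min(out_max, kp * error))   # clamp to actuator limits

# Toy plant: heat input of 2*drive degrees/step against a fixed 0.5 degree/step loss.
temp = 300.0
for _ in range(100):
    temp += 2.0 * p_control(350.0, temp) - 0.5
```

In this sketch the temperature settles just below the setpoint (at 349.5 rather than 350), illustrating the steady-state offset that proportional-only control cannot eliminate.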

It may seem that proportional control alone is sufficient. While basic proportional control is effective, it falls short in terms of error buildup and controlled reaction to upsets, and is also prone to oscillation around the setpoint. To enhance the performance of a proportional controller, integral and derivative terms are added to the closed-loop control approach. The result is the complete, three-term PID algorithm, **Figure 3**.

The integral term (the basis of what industrial control literature calls "reset" action; "anti-reset windup" refers to techniques that limit its accumulation) is based on the accumulated error over some past time period, and accounts for unavoidable error buildup due to imperfections in the entire system or steady disturbances, such as a leak in a tank. The derivative term has a very different role: it is related to the rate of change of the error.

**PID implementation and tuning**

Like the works of Shakespeare and James Joyce, control theory and PID performance have been the subject of countless papers, tutorials, books, and courses; these range from qualitative, intuitive explanations to treatments using advanced mathematics. The formal basic PID equation shows that the control output Kc is determined by:

Kc = (K1 × P) + (K2 × I) + (K3 × D)

where K1, K2, and K3 are the independent coefficients of the P, I, and D terms, respectively. In intuitive terms, the proportional factor represents the present value of the error, the integral term is related to past values of the error, and the derivative term corresponds to the predicted future value of the error.
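The three-term equation maps directly to a few lines of code. The discrete-time sketch below follows the article's K1/K2/K3 form; the gains, timestep, and simple backward-difference derivative are illustrative choices:

```python
# Discrete-time PID: output = K1*P + K2*I + K3*D, following the article's equation.
# Gains and timestep are illustrative; anti-windup limits are omitted for brevity.

class PID:
    def __init__(self, k1, k2, k3, dt):
        self.k1, self.k2, self.k3 = k1, k2, k3   # P, I, and D coefficients
        self.dt = dt                             # control-loop interval
        self.integral = 0.0                      # accumulated past error (I)
        self.prev_error = 0.0                    # for the rate-of-change estimate (D)

    def update(self, setpoint, measured):
        error = setpoint - measured                          # present error (P)
        self.integral += error * self.dt                     # past error (I)
        derivative = (error - self.prev_error) / self.dt     # error trend (D)
        self.prev_error = error
        return (self.k1 * error
                + self.k2 * self.integral
                + self.k3 * derivative)
```

A practical controller would also limit the integral accumulation (anti-windup) and low-pass filter the derivative to tame its noise sensitivity.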

Many closed-loop control systems do not use all three terms. PI-only controllers are fairly common, since the integral term is useful in helping the closed-loop control system reach its target value; in contrast, the derivative term may not be needed and may actually be counterproductive since it is sensitive to unavoidable system noise.

There is an almost limitless variety of closed-loop control and PID approaches, based on both general, ideal situations and application-specific reality. Much of the analysis focuses on minimizing error and achieving optimal performance under various operating conditions, an effort which begins with defining "optimal." For some applications, it means minimizing total error; for others, it is minimizing overshoot or maximum error; yet for others, it is minimizing the time to recover from an upset.

"Tuning" the PID coefficients is the way to optimize performance with respect to user priority, and much of the PID literature is devoted to tuning tactics. Until fairly recently, tuning was done manually, based on operator experience and understanding of the plant or system dynamics. A properly tuned PID controller can maintain the system at setpoint and track it, even as the setpoint changes or there are external system upsets to the plant, **Figure 4.**

However, the theoretical analysis improved greatly as models of the operating plant became more detailed and accurate. Now, nearly all PID tuning is done automatically using sophisticated algorithms programmed into the PID controller, **Figure 5**. Some of these enhanced models even take into account the reality that the plant is often nonlinear, so the PID coefficients may need to change dynamically, depending on where the process is within its overall operating regime. They also can implement a complex sequence of ramp/soak cycles with accuracy and consistency, **Figure 6**.

The value of closed-loop, negative-feedback PID control has been formally recognized for over a century. Basic PID control came into use around 1900, implemented with springs, levers, and a fixed mass, all powered by compressed air; in fact, pneumatic PID controllers were in widespread use until the middle of the 20th century.

These were replaced by vacuum tube-based electronics, then by analog solid-state electronics, and they have now been largely made obsolete by software-driven, microcontroller-based units. These digital versions provide not only basic PID control but also features such as complex tuning algorithms, application-specific variations of basic PID control, flexibility in set-up and operation, and much more.

Interestingly, there is a PID controller that pre-dates both the formal analysis and any mechanical or electronic controllers. Reputable studies have shown that human beings actually (and unknowingly) implement a basic PID strategy when doing everyday tasks such as driving a car or steering a boat, though obviously with varying degrees of proficiency. This is further fascinating evidence that a PID-like control approach has great value in practice, even if the user is unaware of the academic rationale behind it.

**References**

There are thousands of PID and control references available online, from many perspectives. Two good ones are: