>>3473611

I'm going to introduce you to a little bit of systems control theory.

You may have heard of it already; there exists a feedback scheme called Proportional-Integral-Derivative (PID) control. It starts from the assumption that you have some quantity that you can measure, and a set point where you wish it to be regardless of what may disturb it. The scheme starts by observing the state of the system and subtracting that from your set value to come up with what's called the error value - this is the measure of how far off you are from where you want to be.
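In code, that first step is a one-liner; the numbers here are purely illustrative:

```python
setpoint = 100.0                # where we want the system to be
measurement = 87.5              # where it actually is (made-up value)
error = setpoint - measurement  # positive means "below target, push up"
```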

Now, starting with the Proportional part, the idea is to multiply the error value by a constant P such that if the error value is negative, meaning "you've gone too far", the resulting control value is negative, meaning "go back". Increasing the constant P makes the system respond more vigorously, such as "Women are earning half as much as men, so from now on every new woman hired must receive 2x salary.", and that will over time increase the average salary of hired women to the point where they reach parity with men. Now you might already notice the problem - when the error value reaches zero and new hires are getting equal pay, as the lower paid women retire the average salary overtakes that of men and your attempt at control overshoots. If the value of P is too high, you get oscillations that get worse and worse, but if the value of P is too low, you never reach parity. If it's just right, then you may overshoot a little, and then undershoot a little, skating around some, but eventually you'll find a happy middle ground where the system stays where you want it to stay regardless of the environmental pressure upon it.
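Here's a minimal sketch of proportional-only control. The plant model - the state creeps toward the control input while a constant disturbance pushes back - and all the constants are made up for illustration, not taken from any real system:

```python
def simulate_p(kp, steps=200):
    """Proportional-only control of a toy plant.
    All constants are illustrative."""
    state, setpoint, disturbance = 0.0, 1.0, 0.02
    for _ in range(steps):
        error = setpoint - state              # how far off are we?
        control = kp * error                  # P: push back in proportion
        state += 0.1 * control - disturbance  # plant responds sluggishly
    return state

# Too low a P: a permanent gap remains.  Too high: ever-worsening oscillation.
print(round(simulate_p(1.0), 2))    # settles well short of the setpoint, near 0.8
print(round(simulate_p(5.0), 2))    # closer, near 0.96, but still short
print(abs(simulate_p(25.0)) > 100)  # True: oscillation that grows without bound
```

Note that even the "good" gains never quite cancel the constant disturbance - that residual gap is exactly what the Integral term below exists to fix.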

Now, the trouble is, if the changes to the system are very slow, then Proportional control just goes bang-bang from one extreme to another. You have to use very low values of P to avoid oscillation, and then you don't get anything done because your efforts will be so minuscule it's like trying to steer an ocean liner by farting into the wind. That's where the Integral part comes in: it takes into account how long the discrepancy or error has persisted and increases the effort proportionally to that, which has the effect of changing your factor of "amplification". As the system remembers its past, it also increases effort if the desired change doesn't seem to be happening. Choosing P and I correctly for your system gets you there and keeps you there quite reliably. However, the Integral term is always backwards looking - it's living in the past and takes considerable time to realize what's happened, and it bears grudges: old disturbances that no longer exist can still drive the system away from its goal.
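Continuing the same toy plant, a sketch of adding the Integral term (gains again illustrative, not tuned for anything real):

```python
def simulate_pi(kp, ki, steps=400):
    """PI control of the same toy plant: a constant disturbance
    that P alone can never fully cancel.  Constants are illustrative."""
    state, setpoint, disturbance = 0.0, 1.0, 0.02
    integral = 0.0
    for _ in range(steps):
        error = setpoint - state
        integral += error                     # I: accumulated memory of past error
        control = kp * error + ki * integral
        state += 0.1 * control - disturbance
    return state

print(round(simulate_pi(1.0, 0.05), 3))  # the integral winds up until the gap closes
```

The accumulated `integral` is also where the grudge-bearing lives: it keeps growing as long as any error persists, and it only unwinds by erring in the opposite direction for a while.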

Then finally there's the Derivative term - it measures the *rate of change* of your error value and *decreases* effort if it's changing too fast, so it stabilizes the system by observing when things might be getting out of hand. It's the conservative term. In a sense, it's like saying *"we shouldn't try to hit the target right away because we can't just slam the gas and steer hard - that would break stuff - so let's start off carefully and aim for something lower"*. It deals with both changes in the set value and changes in the system state, so that when some calamity hits your system, or you change your opinion about where it should be heading, the result isn't instant panic and hysteria.
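Putting the three terms together, a textbook discrete-time PID update might look like this sketch; the gains passed in are placeholders that would still need tuning:

```python
def pid_controller(kp, ki, kd):
    """Returns a stateful PID update function.
    A textbook discrete form; gains are whatever tuning gives you."""
    integral = 0.0
    prev_error = None

    def update(setpoint, measurement, dt=1.0):
        nonlocal integral, prev_error
        error = setpoint - measurement
        integral += error * dt   # I: the grudge-bearing memory
        derivative = 0.0 if prev_error is None else (error - prev_error) / dt
        prev_error = error
        # D subtracts effort when the error is already falling fast
        return kp * error + ki * integral + kd * derivative

    return update

pid = pid_controller(kp=2.0, ki=0.1, kd=0.5)  # illustrative gains
print(pid(1.0, 0.0))  # first call: no derivative history yet
print(pid(1.0, 0.5))  # error shrank fast, so D trims the output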

Now, each of these values has to be chosen by observing how the control system behaves - they have to be *tuned*, and there is no system to tune the system, except by applying the same concept back on itself recursively, so you're tuning your tuning your tuning your tuning... and you can't just settle down to some values because your overall system and its environment are changing as well, so while some values may be suitable today, other values will be required tomorrow.

And all that is still forgetting that the values you measure and the error you compute are subject to noise and uncertainty - ignorance of reality. Another problem is that your control system disregards its own future: when your goals move around, the system follows behind as if dragged by a rubber band, so you have to think one step ahead. You need a system for setting up the set points by having a model of reality that predicts what your system will do if you give it a particular goal, assuming you know exactly what your system is and what the future will bring, including your own future opinions about what the system should be doing based on your experience of what it actually did.

And how do you come up with that? The dirty secret of control systems engineering is that you can't - your system will never work exactly as desired, and there will always be cases that you couldn't predict. You can spend your life coming up with all the troubles you want to avoid and still there will be more.

Fortunately, for machine systems that work on limited tasks, you can specify and outline their roles carefully, and whatever falls outside that scope is ignored. For social engineering, however, everything is up for grabs - because the people too evolve with the system and nothing stays constant for long, and eventually someone figures out how your system for tuning your system can be influenced and uses that knowledge to de-tune it for their own advantage, forcing you to come up with a system to deal with system-breakers, and system breaker breaker breaker breakers...