A model for computing goals as signals
Imagine the set of instructions required to keep a set of particles undergoing Brownian motion along a single dimension confined to one half of the line for a certain amount of time with a certain probability.
We need to consider (a) a desired boundary / goal, represented as a probability distribution; (b) a detection of the position of the particle relative to the boundary by a ‘detector’; and (c) a potential to redirect the particle in question, provided by an ‘actuator’.
Let’s define what a detection is for our purposes. Ultimately it is the collection of information capable of resulting in a differential action. For instance, in our setup, it must be capable of distinguishing between a particle inside vs. outside some boundary with some probability p, and as a result the device must be able to choose between at least two courses of action: in our case, (1) do nothing or (2) impart force.
The detector sends a signal to the mechanism that could impart force. For the most trivial version of this, we can assume perfect transmission, and we can imagine a case where the particle imparts energy to the detector, and that energy is transmitted directly to the actuator, which relies on it for actuation. This means the actuator consumes the signal. Such a mechanism involves no external energy; ultimately it is simply an energy transfer. But we can now consider a few inputs and their relationship to some outputs. Inputs: (a) the accuracy of the detector, i.e. the probability that the particle truly is in the goal area given that it is registered as such, as well as the probability that the particle is truly outside the goal area given that it is registered as such; (b) the effectiveness of the actuator, i.e. the probability that it successfully imparts a quantum of energy to the particle to prevent it continuing outside the goal area when the detected position calls for it, as well as the accidental trigger rate, where it actuates without any input suggesting that it do so. Outputs: the probability of maintaining the goal state up to time t, the number of detections, the number of actuations, and the expected time until failure.
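As a minimal sketch of this input/output relationship (all parameter names and values below are invented for illustration, and a discrete random walk stands in for Brownian motion), a crude Monte Carlo of the setup might look like this:

```python
import random

def simulate(p_detect=0.95, p_false_alarm=0.01,
             p_actuate=0.90, p_spurious=0.01,
             steps=1000, trials=2000):
    """Crude Monte Carlo: a 1-D random walk that should stay at x >= 0,
    a detector watching the boundary, and an actuator that tries to kick
    the particle back inside when the detector fires."""
    failures = 0
    detections = 0
    actuations = 0
    for _ in range(trials):
        x = 5  # start well inside the goal region
        for _ in range(steps):
            x += random.choice((-1, 1))                       # Brownian-ish step
            if x <= 0:
                registered = random.random() < p_detect       # true detection
            else:
                registered = random.random() < p_false_alarm  # false alarm
            if registered:
                detections += 1
                if random.random() < p_actuate:
                    actuations += 1
                    x = max(x, 1)                             # kick back inside
            elif random.random() < p_spurious:
                actuations += 1                               # actuation with no signal (overhead)
            if x < 0:
                failures += 1                                 # goal state lost before time t
                break
    return {"P(goal held to t)": 1 - failures / trials,
            "detections": detections,
            "actuations": actuations}

print(simulate())
```

Varying the four input probabilities and reading off the outputs is one way to get a feel for how accuracy and effectiveness trade off against the probability of maintaining the goal state.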
We can make this more complicated, though. Detection usually requires some amplification of the energy associated with a given state. In order to detect the particle’s location, we need to allow it to do work on our measurement device, and then use that work to release a cascade of potentials of increasing value. Each stage of this cascade has some probability of a false alarm or a missed detection. We can also consider situations where further detections are made on the signal, either as originally detected or as transmitted.
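To make the cascade concrete, here is a hedged sketch (the per-stage error rates are invented for illustration) of how stage-level miss and false-alarm probabilities compound along an amplification chain:

```python
def cascade_error(stage_miss, stage_false_alarm):
    """End-to-end error rates for a chain of amplification stages.
    A real detection survives only if every stage passes it on;
    a false alarm can be injected at any stage."""
    p_detect = 1.0
    p_quiet = 1.0
    for miss, fa in zip(stage_miss, stage_false_alarm):
        p_detect *= (1 - miss)   # the signal must survive this stage
        p_quiet *= (1 - fa)      # no spurious trigger at this stage
    return {"P(miss overall)": 1 - p_detect,
            "P(false alarm overall)": 1 - p_quiet}

# e.g. three stages of gain, each slightly imperfect
print(cascade_error([0.02, 0.01, 0.01], [0.001, 0.002, 0.001]))
```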
A mechanism may need to handle more than one signal, or to amplify a signal so that it can be transmitted with higher fidelity or to more actuators. Once the amplification has reached a certain point, the signal is put through a control-flow mechanism, which chooses a course of action based on the measured signal and begins an actuation cascade. In these cases each separate actuation cascade should be able to be treated separately and recombined; however, the original signal must be replicated or amplified in order to serve each branch. I suspect the energy must be multiplied N times to preserve the same probabilities. This is where an imperfect transmission might come in. We could also consider situations where the actuator has various levels of force and/or various degrees of freedom it acts on; presumably these can be combined in the same way. Each step of the actuation cascade can then have its output coupled either to another device or to an output.

There should also be a “do nothing” actuator as well as a “nothing detected” detector. Actuation that occurs based off the “nothing detected” detector is simply overhead power, and null actuation is simply excess absorption of the signal with no effect. The probabilities of (a) actuation with no signal and (b) signal with no real particle can be summed up as this sort of inefficiency. So the probabilities relevant to the goal are the probability of failed detection and the probability of failed actuation; similarly, there could also be failed transmission between detector and actuator. We can combine these into a single probability of failure of the mechanism.
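A minimal sketch of that separation, keeping the goal-relevant failure probability apart from the pure-inefficiency terms (all numbers are placeholders):

```python
def mechanism_failure(p_missed_detection, p_failed_transmission, p_failed_actuation):
    """Probability that a needed intervention does not happen:
    the detection, the transmission, and the actuation must all succeed."""
    p_chain_ok = ((1 - p_missed_detection)
                  * (1 - p_failed_transmission)
                  * (1 - p_failed_actuation))
    return 1 - p_chain_ok

def overhead_rate(p_false_alarm, p_spurious_actuation):
    """Wasted energy per opportunity: triggering when there is nothing to contain."""
    return 1 - (1 - p_false_alarm) * (1 - p_spurious_actuation)

print(mechanism_failure(0.02, 0.01, 0.05))  # failure relevant to the goal
print(overhead_rate(0.001, 0.002))          # pure inefficiency
```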
So we have sets of detectors which send signals over some channel to sets of actuators. By considering detectors and actuators in series and in parallel, and potentially in a feedback setup, each with a probability of failure, we should be able to calculate an overarching output energy and probability of failure within time T for a given input energy.
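A sketch of the series/parallel composition, assuming independent failures (the example probabilities are arbitrary):

```python
def series(*p_fail):
    """Blocks in series: the whole chain fails if any block fails."""
    ok = 1.0
    for p in p_fail:
        ok *= (1 - p)
    return 1 - ok

def parallel(*p_fail):
    """Redundant blocks in parallel: the stage fails only if every copy fails."""
    fail = 1.0
    for p in p_fail:
        fail *= p
    return fail

# e.g. two redundant detectors feeding one actuator over one channel
p_detect_stage = parallel(0.05, 0.05)          # 0.0025
p_overall = series(p_detect_stage, 0.01, 0.1)  # detector stage, channel, actuator
print(p_overall)
```

Feedback loops would need more care (the failure events are no longer independent across time), but series and parallel composition already cover the basic topology described above.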
Let’s assume our system has some minimum quantum of energy. The minimum amount of energy we can expend on the measurement is equal to the minimum force we can impart, which is equal to q. We cannot measure the input without at least partially consuming it. If we’re measuring a single quantum, it will be consumed. If we emit a quantum corresponding to it in order to preserve the outgoing signal, we’re doing actuation in response to the signal. The result is that we obtain information about the signal but, at best, increase the noise along other degrees of freedom. In practice, since there are also errors in detection and actuation, we increase the noise of the channel further.
This should give a taste of how the basic model can generalize.
Going back to our idealized setup, we should be able to calculate the probability of failure as a function of the probabilities of failed detection and failed actuation. In our perfect model, if the particle we’re dealing with is quantum-sized then the model just reduces to reflection. The particle is absorbed at the boundary by the detector and re-emitted by the actuator. No external energy is required.
However, in a case where the particle is larger and we do not consume it through detection, if our detector is triggered then our actuator must be able to impart an amount of energy corresponding to the mass of the particle (relative to the quantum size) to contain it. This is just enough to stop the particle. It then has an equal probability of moving away from the detector (further into the goal space) or back toward the detector. If we impart 2x the force (i.e. provide 2x the energy), we halve the probability of it immediately coming back toward the detector. The two quanta increase the probability of detection but also decrease the probability of successful actuation, so as far as the probability of failure is concerned, the mass cancels out. However, by providing 2x the detectors or 2x the actuators, we should be able to increase the probability of success.
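A toy check of the intuition about extra kick, under the invented assumption that each extra quantum of actuation leaves the particle one additional step inside the boundary:

```python
import random

def p_return_to_boundary(kick, steps=10, trials=50_000):
    """After an actuation that leaves the particle `kick` steps inside the
    boundary (kick=0 means merely stopped at the boundary), estimate the
    probability that it wanders back out within `steps` random-walk steps."""
    returns = 0
    for _ in range(trials):
        x = kick
        for _ in range(steps):
            x += random.choice((-1, 1))
            if x <= 0:
                returns += 1
                break
    return returns / trials

for kick in (0, 1, 2, 3):
    print(kick, p_return_to_boundary(kick))
```

The return probability falls with the size of the kick over a short horizon, though a 1-D walk eventually returns with probability 1, so the benefit of extra kick is only a delay unless the detector/actuator pair intervenes again.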
This means that if we were able to trust our detectors and actuators 100%, there would be some amount of energy we could impart to the particle to guarantee it will not exit the goal space within time T.
If we add a second, redundant detector, we can increase the probability of detection to 1 – (1 – p)^2, and similarly if we add a redundant actuator. This would require proportionally increased energy, and for many mechanisms the two would be equivalent. However, both of these approach 1 only asymptotically, so they would never be capable of guaranteeing success, with the possible exception of simple collisions and reflections of fundamental particles.
The end result is a formula:
For N times the energy, we improve our probability of success over time t to a maximum of 1 – (1 – p)^N. This means that for double the power, we have at most a 1 – (1 – p)^2 chance of success.
Comparing this to channel capacity, we can see that the logarithmic relationship between power and probability remains: the failure probability (1 – p)^N falls exponentially in N, so the energy required grows only logarithmically in the desired reliability.
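A quick numerical check of the formula and of the claimed logarithmic relationship (p = 0.9 is an arbitrary illustrative value):

```python
import math

p = 0.9  # success probability of a single detector/actuator pair
for N in (1, 2, 4, 8, 16):
    p_success = 1 - (1 - p) ** N
    # Energy grows linearly with N, but the failure probability (1 - p)**N
    # falls exponentially, so the "nines" of reliability grow linearly in N
    # and the energy needed for a target reliability grows only like
    # log(1 / P(failure)).
    print(N, p_success, -math.log10(1 - p_success))
```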