The free energy principle is a model of intelligent behavior which states that an agent’s purpose is to minimize surprise or uncertainty, and that this can be described as an interplay between a set of internal states and a set of external states, mediated via sensation and action.
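As a toy illustration of that interplay (the variable names and numbers here are mine, not part of the principle itself), we can sketch an agent whose internal state descends a prediction-error gradient toward a hidden external state:

```python
import random

# A minimal sketch, assuming surprise can be proxied by squared prediction
# error: the agent's internal state tracks a hidden external state by
# descending the gradient of that error on each noisy sensation.

random.seed(0)

external_state = 5.0   # hidden cause in the environment
internal_state = 0.0   # the agent's belief about it
learning_rate = 0.1

for step in range(200):
    sensation = external_state + random.gauss(0, 0.5)  # noisy sensory sample
    prediction_error = sensation - internal_state      # the surprise proxy
    internal_state += learning_rate * prediction_error # update belief to reduce it

print(abs(internal_state - external_state) < 0.5)  # belief has settled near the cause
```

Sensation flows from the external state inward; the belief update plays the role of the internal dynamics that keep surprise low.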
I’m not the first to suggest that our society and institutions can be described as having some form of collective intelligence. With the free energy principle, there’s a direct model of how the scientific observations we make can be integrated into a set of shared states that define our collective knowledge.
It’s useful to make a clear distinction between sentience and consciousness, both being forms of general intelligence. When I make this distinction, I consider sentience to be the intelligence we clearly share with the majority of animals: flexible, adaptive behavior toward a persistent goal. One way to model the interplay between such an intelligence and its environment is through the free energy principle. On the other hand, when I refer to “consciousness”, I consider it to require a self-model, which allows a system not only to generate adaptive behavior, but also to model that behavior, modify it, and have a process for deciding whether a certain behavior is appropriate for a given context.
My guess is that this self-awareness arises when a sentience is “networked” to other intelligences in a distributed system that can arrive at shared state through some consensus algorithm. The dynamics which, at this higher level, create a new distributed sentience also provide a form of collective representation, including the possibility of a collective representation of one’s self. This acts as a sort of mirror, enabling self-representation and, eventually, the ability to represent and modify one’s own behaviors. I believe that such a networked arrangement would also tend to propagate itself, replicating the arrangement at many levels, if it does manage to reduce surprise.
A computational view of emergence, as I described here for example, provides a clear path to modeling such a system. If we model these systems with the free energy principle, we already know of consensus algorithms which could allow multiple agents driven to minimize surprise to collectively form a new agent that is similarly driven to minimize surprise. There exist Turing machines which minimize surprise, and which can further interact, perhaps via something like a process calculus, in such a way as to also minimize surprise collectively. An important side note is that the collective surprise may be minimized even as the surprise for each individual agent increases, at least temporarily.
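One hedged way to sketch this is with simple averaging consensus (my choice of algorithm; the text above doesn’t commit to a specific one): each agent blends its own noisy observation with the group’s mean belief, and the group’s disagreement, a crude proxy for collective surprise, shrinks even though any individual update can move an agent away from its own last observation.

```python
import random

# A sketch of averaging consensus among surprise-minimizing agents.
# All names and constants here are illustrative assumptions.

random.seed(1)
true_value = 3.0
beliefs = [random.uniform(-10, 10) for _ in range(5)]

def disagreement(bs):
    # Spread of beliefs around the group mean: a proxy for collective surprise.
    mean = sum(bs) / len(bs)
    return sum((b - mean) ** 2 for b in bs)

initial = disagreement(beliefs)
for step in range(100):
    mean = sum(beliefs) / len(beliefs)
    for i in range(len(beliefs)):
        observation = true_value + random.gauss(0, 0.3)
        beliefs[i] += 0.1 * (observation - beliefs[i])  # individual surprise reduction
        beliefs[i] += 0.2 * (mean - beliefs[i])         # consensus pull toward the group

final = disagreement(beliefs)
print(final < initial)  # collective disagreement has shrunk
```

The consensus term is what makes the five agents behave like one agent with a shared state.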
In the Turing machine version, some mechanisms which are probabilistic in the free energy principle model collapse to probabilities of 1 or 0. If we’re trying to fully embed the Turing machine as a special case within the free energy principle, this kind of degenerate correspondence might be appropriate.
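To make the “probabilities of 1 or 0” idea concrete (the construction below is my own assumption, not a standard embedding): a deterministic transition table can be rewritten as a distribution that puts probability 1 on the single allowed outcome, so “sampling” a step reduces to a lookup.

```python
# A tiny bit-flipping machine. Transition table:
# (state, symbol) -> (next_state, symbol_to_write, head_move)
delta = {
    ("run", 0): ("run", 1, +1),
    ("run", 1): ("run", 0, +1),
}

# The same table as a degenerate distribution: each outcome has probability 1,
# every other outcome implicitly has probability 0.
delta_prob = {key: {outcome: 1.0} for key, outcome in delta.items()}

def step(tape, head, state):
    # "Sampling" from a probability-1 distribution is just a lookup.
    outcomes = delta_prob[(state, tape[head])]
    (next_state, write, move), p = next(iter(outcomes.items()))
    assert p == 1.0
    tape[head] = write
    return head + move, next_state

tape, head, state = [0, 1, 1, 0], 0, "run"
while head < len(tape):
    head, state = step(tape, head, state)

print(tape)  # the machine has flipped every bit
```

Relaxing those 1s and 0s into genuine distributions is what recovers the fully probabilistic picture.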
Not every Turing machine minimizes surprise, but some do, which means that any Turing-complete machine *could* arrive at behavior that minimizes surprise. However, such a computation-based view is not the only one possible. We can see that Turing machines are a special case of the model outlined by the free energy principle.
If other systems, ones which cannot be modeled as Turing machines, were networked together using some set of processes that let them converge to stable states which collectively minimize surprise, they might be able to arrive at states that a Turing machine never could.
What might this look like?
Since any signal can be modeled as a sum of frequency components via the Fourier transform, we can imagine a system with sensors tuned to various frequencies. Certain combinations of frequencies could trigger mechanisms that cause the system to anticipate future signals. Theoretically, the system might be able to harvest energy from such oscillations and use that energy to produce its own tones. Those tones might adjust its environment so that certain signals come through more clearly, increasing the efficiency with which it harvests that energy. Thus a sentience might be able to exist in such a system.
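A frequency-tuned sensor can be sketched directly (this is my own toy construction): correlating the incoming signal with a sinusoid at the sensor’s tuned frequency is exactly one term of a discrete Fourier transform, so only a sensor matching the tone responds strongly.

```python
import math

# A toy frequency-tuned "sensor": the magnitude of the DFT component
# at its tuned frequency. Names and numbers are illustrative assumptions.

def sensor_response(signal, freq, sample_rate):
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * t / sample_rate)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / sample_rate)
             for t, s in enumerate(signal))
    return math.hypot(re, im) / n

sample_rate = 1000
# One second of a pure 50 Hz tone.
signal = [math.sin(2 * math.pi * 50 * t / sample_rate) for t in range(sample_rate)]

tuned = sensor_response(signal, 50, sample_rate)     # sensor tuned to the tone
detuned = sensor_response(signal, 120, sample_rate)  # sensor tuned elsewhere

print(tuned > 10 * detuned)  # only the matching sensor fires strongly
```

A cascade of such sensors, each triggering further mechanisms, is all the “anticipation” in the paragraph above requires.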
Now let’s imagine several such agents were networked. Since they can produce their own tones to change their own environment, they should be able to do the same to change the environment of the other agents, in an attempt to cooperate or compete. Since they interact via tones, they might develop an acoustically-based system which uses consonance and dissonance to push and pull each other into a steady state. And the collective processes doing that might replicate themselves at various levels. We can similarly imagine such agents working within the electromagnetic spectrum rather than the acoustic one.
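One existing formalism for agents pulling each other into a steady state via oscillations is the Kuramoto model of coupled oscillators; I’m substituting it here as an analogy, since the text doesn’t name a specific mechanism. Each agent’s phase is nudged toward its neighbors’, and with enough coupling the group locks into step:

```python
import math
import random

# Kuramoto-style phase coupling as a stand-in for tones pushing and
# pulling agents into a steady state. Constants are illustrative.

random.seed(2)
n = 6
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
natural = [1.0 + random.gauss(0, 0.05) for _ in range(n)]  # similar natural frequencies
coupling, dt = 2.0, 0.01

def coherence(ph):
    # Kuramoto order parameter: 1.0 means perfect synchrony.
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

before = coherence(phases)
for step in range(5000):
    # Each agent feels a pull toward every other agent's phase.
    pulls = [sum(math.sin(q - p) for q in phases) for p in phases]
    phases = [p + dt * (w + coupling * pull / n)
              for p, w, pull in zip(phases, natural, pulls)]

print(coherence(phases) > 0.9)  # the group has pulled itself into step
```

The synchronized state is the “steady state” in the paragraph above; consonance and dissonance play the role of the attractive and repulsive phase pulls.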
If this were the case, it would be convenient for these agents to take advantage of atomic resonance properties to manipulate the electromagnetic spectrum by producing various emissions of light, arranging electric fields in space, and developing specific physical mechanisms tuned to various frequencies which trigger a cascade of responses.
This is essentially a description of life as we know it. Whether it is driven by such agents is an interesting question, and central to it is the question of what algorithms such systems would use in order to arrive at their shared state which provides a higher level of sentience.
Are those algorithms computable? We don’t know. But we can imagine an account that describes their behavior either way, so we don’t need to fall back to a computational view of the universe simply because we can’t imagine or describe another one.