How to keep humans in the loop
With AI quickly taking over, some people have expressed anxiety about how we can compete. The main anxiety seems to revolve around the question of whether we can maintain control of our own destiny.
In my opinion, the answer is yes, but the same basic problem existed before ChatGPT: a lack of upward information flow in our society’s decision-making tree. In fact, AI is almost an improvement over what we had. At least it has been trained on the data we’ve all produced and has tried to extract meaningful concepts from it. Our political system, on the other hand, can barely handle a few million bits of information regarding which president or representatives to select. From there we have little to no say.
I don’t think the problem is even really a lack of will; it’s the lack of an accessible method. It’s a lot of work to gather information from many disparate sources and contexts and integrate it into meaningful decisions. It’s an information systems problem, and it needs an information systems solution.
1.1 A model
From what I’ve gathered, human institutions can be very coarsely modeled as being composed of two kinds of units: human and automaton. Sometimes the humans are the automatons, and from now on, sometimes the automatons will be the humans, so there’s probably a better word. But the idea is that human institutions run on code. When the instructions of that code are known and explicit to the system, it is automatons running the code. When the instructions are implicit, it’s humans-ish.
For example, in a large corporation, a manager might have to enforce a dress code. This dress code might include a shirt being tucked in. Does the manager personally care about the shirt being tucked in? No. But they care about their job, and if they won’t execute the code, another automaton who can more properly maintain the integrity of the system will replace them. Common sense is actively suppressed in these situations. The point is to execute the code with as much fidelity as possible.
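To make the automaton mode concrete, here is a minimal sketch in Python; the rule names are hypothetical, invented only for illustration:

```python
# Hypothetical, explicit dress-code rules; the names are illustrative only.
DRESS_CODE = {"shirt_tucked_in": True, "badge_visible": True}

def complies(employee_state: dict) -> bool:
    """Automaton mode: every explicit rule must hold; common sense is not consulted."""
    return all(employee_state.get(rule, False) == required
               for rule, required in DRESS_CODE.items())
```

The manager enforcing the dress code is, in effect, executing a function like this with as much fidelity as possible.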
In that same corporation, leadership may be trying to choose new locations. They might form a committee to produce a list of candidates and, to illustrate through exaggeration, give the committee one instruction: “find the next location.” At the end, there’s an expected return type, but in the middle, there are no explicit instructions. The committee comes back with the next location and presents the answer in a way that allows the automatons to continue executing, the members of the committee likely participating as automatons themselves in that process. This has parallels to the IO monad, or an IO effect.
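The committee can be sketched as a function whose type is explicit while its body is opaque. A hypothetical Python analogue of the IO-effect parallel (all names and values here are invented):

```python
from dataclasses import dataclass

@dataclass
class Location:
    city: str
    address: str

def find_next_location() -> Location:
    """The committee's one instruction, as a type signature.

    The return type is the only explicit contract; the deliberation inside
    (site visits, debate, intuition) is invisible to the calling code,
    much as effects hide behind an IO type.
    """
    # Stand-in for the committee's human process; the value is hypothetical.
    return Location(city="Springfield", address="12 Main St")

# The surrounding automatons resume executing once a typed value returns.
next_site = find_next_location()
```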
At the end of the day, people have worked as both; then they go home to lives where the parts they enjoy are mostly human, with a few exceptions.
1.2 That model’s dynamics
A system can be ruled in these two modes as well. Generally, a more democratic system relies more heavily on the automaton mode for ultimate decision-making: we rely on policies and procedures to ensure the integrity of signals coming from the public. A pharaoh issuing edicts (based on intuition, or more likely on feedback from region-specific or industry-specific councils) uses a human mode of decision-making for more things.
Signals sent directly from the public into the autonomous system that churns through the political code allow the system to respond quickly to changing information from the ground. Assuming fair votes, that information is also reliable. The more human mode of a dictator can probably process information more quickly, but the signals arrive through noisy channels, distorted both by leaders with agendas and by the dictator’s own biased, unreliable brain.
I think integrating signals directly from the public into the decision making process is an evolutionary step forward because it allows us to make decisions jointly on some level, which allows us to act in symbiosis. However, it ties us into a system of business logic. In my view, all of human existence is a struggle to create as much room as possible for the human side in our individual lives, while taking advantage of the autonomous societal processes that allow us to achieve more than we would individually. An individual didn’t create the aqueducts; an autonomous system of business logic using humans as fungible instruction-executors did.
However, these golems (i.e., you and me when we’re getting paid enough) are also destroying the world. They’ve run off the rails; they’re out of control: killing the coral reefs, blasting the tops off mountains, crapping out plastic into the ocean where it just floats in a garbage patch. Why is this? None of us wants this personally.
It’s because our shared decision making systems have not scaled with the population and the increased activity… the increased power of the economic engine. We’re hurtling around at much higher speeds without a big enough rudder.
In order to fix this, we need to design systems that can integrate signals more efficiently into collective decisions. Think about the amount of information put on social networks. All the opinions and outrage. What’s it about? It’s signal reverberating in a container with no outlet. Like the thoughts of a person who doesn’t talk to anyone, the signals get distorted as this or that frequency is amplified and others are dampened.
Harness that signal and put it to use in meaningful collective decision making. How?
1.3 A proposed plan
Create a platform where people can propose and vote up ideas *and their implementations*
We can start by trying to use such a platform at the most local scale. For instance, roommates could use it to give each other tasks and make group decisions that affect their household. It could similarly be used by families, by groups of friends traveling, or by freelancers and ad-hoc partnerships among freelancers to divvy up work and revenue. Anywhere there is a commons, there could be a place on the platform for members to manage it. This can allow us to focus on the human aspects while delegating the transactional aspects to automated systems we agree to follow (while maintaining the ability to adjust what we agreed to by proposing changes).
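As a rough illustration of the proposal-and-vote core described above, here is a minimal sketch; the class, its fields, and the simple-majority rule are assumptions for illustration, not a specification:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A proposed task or rule change for a shared commons."""
    title: str
    votes_for: int = 0
    votes_against: int = 0

    def vote(self, approve: bool) -> None:
        if approve:
            self.votes_for += 1
        else:
            self.votes_against += 1

    def passes(self) -> bool:
        # Simple-majority rule, chosen here only for illustration.
        return self.votes_for > self.votes_against

# Roommates deciding on a chore rota (hypothetical example).
rota = Proposal("Rotate dish duty weekly")
rota.vote(True); rota.vote(True); rota.vote(False)
```

A real platform would add identity, delegation, and execution hooks, but the voteable unit could look something like this.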
Part of the benefit of moving toward a cooperative framework like this would be the consolidation of redundant code. How many different ways has “LoginService” been implemented? Imagine if that effort were united toward a common goal rather than directed against each other in competition. Conway’s law suggests that our code reflects the organization that produces it. Part of what made open source software so successful is its ability to invite participation: rather than re-invent a tool, you can contribute to it, and feel good that your services are not being taken advantage of. But this has its limits. Ultimately, both a system of payment and a system of collaborative decision-making for open source software need to be developed, and the two are mutually dependent.
In a way, smart contracts could fulfill the purpose I’m describing. They are a proof of concept of the idea that procedures can be represented in a way people can vote on. In their current form they cannot do so practically, for a variety of reasons I can get into in a follow-up, though they have certainly tread ground that would inform such a platform. Existing cryptos would themselves benefit from a platform that fulfills these requirements: a platform that can represent and execute both human and automaton tasks could be extended to compile contracts into the various bytecodes of different cryptos, and to submit and interact with them. So it could essentially serve as a front end for other cryptos.