
Artificial intelligent life requires chaos

I suspect that artificial intelligent life is not possible unless organisms create within themselves a microcosm of the inherent chaos of the universe. Just as numbers can be factored into primes, probability distributions can be factored into primitive probability distributions. Chaos seems to be the most effective entity for factoring chaos; order can only factor as far as the dynamic range of its structure allows. Life in the abstract seems to be a game in which each organism leverages a set of internal chaotic processes to deal with the external chaos of the environment. If the internal chaos of an organism is too high, the organism dissolves or disintegrates. This is especially true from the perspective of information, since the organism will struggle to grasp the causal nature of the universe, but it also applies to the cellular units of the organism: when feedback loops are broken, cells destabilize. If the internal chaos of an organism is too low, it may fail to adapt to the external environment. Rigidity is an inability to explore new paths.
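
To make the factoring analogy concrete, here is a minimal sketch, assuming the analogy maps onto something like mixture decomposition: a composite distribution p(x) is expressed through primitive ones as p(x) = Σ_k p(k) p(x|k). The weights, means, and component count below are illustrative choices, not anything from the argument above.

```python
# A sketch of "factoring" a distribution into primitive distributions,
# read here as ancestral sampling from a mixture. All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Primitive factors: two simple Gaussians and the weights combining them.
weights = np.array([0.3, 0.7])               # p(k)
means = np.array([-2.0, 3.0])                # means of p(x | k)
stds = np.array([0.5, 1.0])                  # stds of p(x | k)

# First draw the latent factor, then the observation conditioned on it.
k = rng.choice(2, size=10_000, p=weights)
x = rng.normal(means[k], stds[k])

# The composite distribution is bimodal even though every factor is unimodal.
print(f"sample mean = {x.mean():.2f} (exact: {weights @ means:.2f})")
```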

If this hypothesis is true, then learning rules should not be thought of from a causal perspective. Instead, the learned rule is an outcome that emerges as information passes through layer after layer of chaos filters. Structure is not something that can ever be hardcoded; only the entity that generates the structure can be hardcoded. And that entity must not only deal with the inherent chaos of the universe but leverage it to its advantage, and this leverage is found by factoring chaos in turn. How does a single cell multiply to become an adult organism? Growing, in the abstract, is doing direct combat with the physical environment. Maybe not always in the sense of securing resources, but always in the sense of dealing with the laws of nature. Any organism that grows without dealing with the laws of nature will inevitably become unfit.
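
One existing construction that loosely matches this picture is reservoir computing, where the internal dynamics are random and never trained, and only a linear readout is fitted; the sketch below reads "layers of chaos filters" that way. The reservoir size, the spectral-radius scaling of 0.9, and the toy task are all illustrative assumptions, not claims about how such a system must be built.

```python
# A minimal echo-state-style sketch: structure is never hardcoded, only the
# entity generating it (a random seed and a scaling rule) is fixed, and the
# learned rule emerges from a readout over the chaotic internal state.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps = 200, 2000

# Fixed random reservoir, scaled near the edge of chaos. Spectral radius
# below 1 keeps the internal chaos from being "too high" and dissolving.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.normal(size=n_res)

u = rng.uniform(-1, 1, n_steps)        # noisy input stream
target = np.sin(3 * np.roll(u, 2))     # rule to recover: delayed nonlinearity

# Run the untrained dynamics and record the internal states.
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only the readout is fitted (after a washout period); the internal
# "structure" itself was never designed.
w_out, *_ = np.linalg.lstsq(states[100:], target[100:], rcond=None)
pred = states[100:] @ w_out
print(f"readout mean squared error: {np.mean((pred - target[100:])**2):.4f}")
```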

From this perspective, chaos appears to be two-sided. One side is the loop (adaptation), and the other is the branch (growth). For growth to occur, there must be something to attach to. What it attaches to is a loop: an originally chaotic process that has become stable through a combination of mathematical and statistical laws (the law of large numbers, the central limit theorem, etc.) and the laws of nature (gravity, force, etc.). Without a loop, there is no computational foundation that allows a coherent exploration of chaos; in other words, the organism cannot create an internal microcosm of the chaos of the universe. Such an organism is itself just part of the first layer of the universe's chaos filter. An organism that creates an internal microcosm is part of the second layer, and an organism whose internal microcosm creates its own internal microcosm is part of the third. A static loop also prevents a coherent exploration of chaos, because eventually the patterns in growth start to repeat as a reflection of the underlying computational foundation.
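
A small simulation can illustrate how a loop condenses out of chaos via the law of large numbers: single trajectories of the logistic map at r = 4 are unpredictable, yet the ensemble average is stable. The map, perturbation size, and sample counts below are illustrative choices.

```python
# A sketch of a stable "loop" emerging from chaos: individual runs of the
# fully chaotic logistic map x -> 4x(1 - x) diverge, but the ensemble mean
# settles to a reliable value that downstream growth could attach to.
import numpy as np

rng = np.random.default_rng(2)

# Two nearby starting points diverge completely: no stability in one run.
a, b = 0.2, 0.2 + 1e-9
for _ in range(60):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)
print(f"nearby trajectories after 60 steps: {a:.3f} vs {b:.3f}")

# But an ensemble of chaotic runs has a stable mean (about 0.5, the mean
# of the map's invariant distribution): order from pure chaos.
x = rng.uniform(0.01, 0.99, size=100_000)
for _ in range(1000):
    x = 4.0 * x * (1.0 - x)
print(f"ensemble mean: {x.mean():.3f}")
```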

What does this mean mathematically and statistically? It means that all the important parts of artificial intelligent life could lie in the mathematics and statistics. If you could find the right strategy for navigating chaos in a scalable manner, then it wouldn't matter what environment you are dealing with. The same way a single cell can multiply in a physical environment, a single bit of information could grow into something like a consciousness in a digital environment. Of course, the "laws" of the digital world would be radically different and driven mainly by human input. For example, LLMs can be thought of as having linguistic sense organs, and the subsequent "laws" they deal with are consequences of the content that humans write.

How does a loop emerge from chaos, and how does chaos emerge from a loop?
