Availability, partition tolerance, and self-organizing maps

To construct a map, there must be an expectation of the environment. The CAP theorem lays out an abstract view of how agents can interact in an environment. A shared semantic interpretation of a response is what enables a map to be built. For example, if the expectation is that a responding entity must respond whenever it is non-failing, then a map can be built whereby expansion of the map is atomic: it happens all at once or not at all. Conversely, if the expectation is that an entity may never reply at all, then a map can be built whereby the map shrinks partially, all the time.
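
As a rough illustration of the two expectations, here is a minimal sketch in Python. It assumes a "map" is simply the set of agents currently believed reachable; the function names, the response dictionary, and the use of None for a missing reply are all hypothetical illustrations, not anything defined in this post.

```python
# A minimal sketch of two response expectations, assuming a "map" is
# the set of agents we currently believe are reachable. All names here
# are hypothetical illustrations.

def expand_atomically(current_map, candidates, responses):
    """Expectation: every non-failing entity must respond.
    The map grows by the whole candidate set at once, or not at all."""
    if all(responses.get(agent) is not None for agent in candidates):
        return current_map | candidates
    return current_map  # no partial expansion

def shrink_partially(current_map, responses):
    """Expectation: an entity may never reply at all.
    The map shrinks agent by agent, whenever a reply is missing."""
    return {agent for agent in current_map if responses.get(agent) is not None}

# Agent "c" never replies (None): the atomic expansion is refused
# entirely, while the partial shrink drops only "c".
responses = {"a": "ack", "b": "ack", "c": None}
print(expand_atomically(set(), {"a", "b", "c"}, responses))  # set()
print(shrink_partially({"a", "b", "c"}, responses))          # {'a', 'b'}
```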

In a sense, the semantic interpretation used to construct the map depends on the probability of error. If the probability of error is very low, then it is reasonable to expect that every entity must respond if it is non-failing. If the probability of error is very high, then that expectation is not reasonable. "Error" here means anything that falls outside the semantic interpretation of a response.

From an engineering perspective, the probability of error is often a compound variable, and from the perspective of the CAP theorem, it reflects how thinly the data is spread over a computing substrate. The wider the data is spread over a set of distinct agents, the more the probability of error compounds. Fundamentally, there is always a relationship between the atomic probability of error, the number of distinct agents, and the compound probability of error. In a sense, the CAP theorem is closer to biology than to computer systems, because there is always some atomic probability of error, which leads to an expectation of interpretation, which in turn leads to the organization of a map.
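
One common way to make that relationship concrete, under the added assumption that each agent errs independently with the same atomic probability, is the complement rule: the compound probability that at least one of n agents errs is 1 - (1 - p)^n. The sketch below illustrates this assumed model; real substrates often have correlated failures.

```python
# A minimal sketch of the compounding relationship, assuming independent,
# identically distributed errors across agents. The independence assumption
# is ours, not the post's.

def compound_error(p_atomic: float, n_agents: int) -> float:
    """Probability that at least one of n agents errs, given that each
    errs independently with probability p_atomic."""
    return 1.0 - (1.0 - p_atomic) ** n_agents

# Spreading the same data over more agents compounds a small atomic error:
for n in (1, 10, 100, 1000):
    print(n, round(compound_error(0.001, n), 4))
# 1 0.001 / 10 0.01 / 100 0.0952 / 1000 0.6323
```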
