Time, partitioning, and synchronization

Any time-measuring method inevitably runs into the issues of partitioning and synchronization. Partitioning deals with dividing a larger measure into smaller measures, and combining smaller measures into a larger one. Synchronization deals with how a set of devices can self-correct when some of them are corrupted. The two are fundamentally related, because a choice in one often determines a choice in the other. A measure is often defined by a set of synchronization points, such as the radioactive decay of an element or the frequency of a crystal oscillator. Synchronization points, in turn, can often be defined as a measure of a change in space, such as the revolution of a planet around a star, or the change in energy state of an oscillating structure. Fundamental to both is the notion of change.

A synchronization event can only be defined if there is a unit of space in which a change is observed, and the magnitude of that space can be large (such as the movement of the stars) or small (such as the oscillation of a crystal). The choice of magnitude determines how well a set of clocks is able to self-correct when corrupted. The larger the magnitude, the more entangled the space becomes with the definition of synchronization; the smaller the magnitude, the less entangled it is. In other words, the larger the magnitude, the harder it is to deny that a synchronization event has taken place, since the event is defined by a change in a large unit of space (for example, the revolution of a planet around a star). Or in more practical terms: the larger the number of observers able to witness an event in a space, the more undeniable the event is among a population. It is easy to self-correct a large number of corrupt clocks because the synchronization definition is entangled with a large volume of space.
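A minimal sketch of this self-correction idea: if most clocks witnessed the same widely observed event, a corrupted minority can be overridden by the consensus reading. The median-voting scheme and the numbers below are my own illustration, not something specified in the post.

```python
def self_correct(readings):
    """Return the consensus reading: the median of all clock readings.

    Tolerates corruption as long as fewer than half of the clocks
    report arbitrary values, since the median is drawn from the
    honest majority.
    """
    ordered = sorted(readings)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Nine clocks observed the same large-scale event at t = 100;
# three are corrupted and report garbage.
readings = [100, 100, 100, 37, 100, 100, 240, 100, 512]
consensus = self_correct(readings)   # 100
corrected = [consensus] * len(readings)
```

The point of the sketch is only that undeniability scales with the number of observers: the more clocks share the entangled event, the larger the corrupted minority that can be repaired.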

While it might seem always better to entangle the largest possible volume of space with a synchronization definition, such definitions are often terrible at handling partitioning. This is partly because synchronization definitions across larger volumes of space tend to have longer synchronization periods (compare the revolution of a planet around a star to the oscillation of a crystal), a correlation that may be a consequence of the universe having a maximum velocity (the speed of light). As a result, it is often hard to use a large synchronization event to define mathematically convenient partitions. Imagine using the exact movement of the earth around the sun as the definition of a year. We would then need to partition that measure into hours, minutes, and seconds, yet we have no reference point against which to check whether such partitions are fundamentally correct, because the synchronization definition operates on a longer time horizon.
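A concrete instance of this awkwardness is the calendar: the large synchronization event (one orbit) does not divide cleanly into the smaller unit (days). The figures below use the standard approximation of roughly 365.2422 days per tropical year, which is my own added detail rather than a number from the post.

```python
# One orbit is about 365.2422 days, so a flat 365-day calendar
# drifts against the synchronization event it is meant to track.
TROPICAL_YEAR_DAYS = 365.2422

drift_per_year = TROPICAL_YEAR_DAYS - 365       # ~0.2422 days/year
drift_per_century = 100 * drift_per_year        # ~24 days/century

# The Gregorian patch: 97 leap days every 400 years gives a mean
# calendar year of 365.2425 days, leaving a small residual error.
gregorian_mean = 365 + 97 / 400                 # 365.2425
residual = gregorian_mean - TROPICAL_YEAR_DAYS  # ~0.0003 days/year
```

The leap-year rule is exactly the kind of correction term the post predicts: the partition is not fundamentally correct, so it must be periodically patched against the larger event.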

On the other hand, it is much easier to handle partitioning if the magnitude of space is small (and the synchronization frequency is high). This is because partitioning is a fundamentally trivial problem whenever the synchronization frequency is higher than the partitioning frequency: if a quartz crystal oscillates faster than whatever unit of time one is trying to define, then that unit is trivially defined by counting oscillations. These definitions are reliable because we entangle the space in which the raw phenomenon occurs (e.g. the space in which the behavior of a piezoelectric crystal is observed) with the definition of a unit of time. Therefore, the higher the frequency of the synchronization event, the easier it is to compose units of time. However, it is hard to self-correct a large number of such corrupt clocks, because they are only locally entangled.
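This counting argument can be sketched directly. The 32768 Hz figure is the common watch-crystal frequency (2^15), an assumption I am adding for illustration; the mechanism is just a counter that divides the high-frequency synchronization events into a lower-frequency unit.

```python
# Partitioning is trivial when the synchronization frequency exceeds
# the partitioning frequency: just count raw oscillation events.
OSC_HZ = 32768  # assumed watch-crystal frequency (2**15 events/second)

def partition(ticks, unit_hz=1):
    """Count how many whole units of `unit_hz` fit into `ticks` raw events."""
    events_per_unit = OSC_HZ // unit_hz
    return ticks // events_per_unit

seconds = partition(5 * OSC_HZ)        # 5 seconds' worth of ticks -> 5
tenths = partition(5 * OSC_HZ, 10)     # same ticks at 10 Hz -> 50
```

Because the oscillator ticks far faster than any unit we care to define, every coarser unit is just an integer count of ticks, which is the sense in which composition becomes easy.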

Truth always flows downward from the universe. The larger the truth, the easier it is to self-correct in case of corruption, but the harder it is to partition. The smaller the truth, the harder it is to self-correct in case of corruption, but the easier it is to partition.
