10. Serial and Parallel Sympositions and Non-Hierarchical Partonomies

So far, I have not explicitly juxtaposed time with space in the context of symposition, and consequently the concept of sem-linking has by default referred to the collecting of symponents in a region of space at some specific moment. Sem-linking does not have to be restricted in such a way and can collect symponents that have spatiotemporal instead of just spatial pose. {volcanic eruption} is a symposition that is not localized at one specific moment, as is {butterfly} with its splendid metamorphosis. There are two things fundamentally different between time and space that complicate the generalization of symposition to the temporal dimension, however. First, note that if space were 2D instead of 3D—a plane instead of a volume—and were similarly occupied by point-like physical primitives, then the symposition of these primitives into a Logos would be basically the same process as if it were 3D. This holds also if space were 4D and again occupied by point-like physical primitives. The problem is that physical primitives are not point-like in our 4D spacetime; they are line-like, tracing extended paths through the dimension of time.

The other difference between time and space is that, given a bunch of points in space, there is no natural ordering to them; they’re just all there at once. On the other hand, given a bunch of points in spacetime, there is a natural ordering to them: the chronological order of first to last point.1 A consequence of this is that when a partonomy is serialized in order to put it into writing, for example with a hydrogen atom: {e, {qu, qu, qd}}, arbitrary choices must be made to put the electron before the proton and then the up-quarks before the down-quark. These choices are not arbitrary with the temporal symponents of a symposition; just put the first one first. Consequently, a clear distinction can be made between serial sympositions, whose symponents can be organized temporally in a series, and parallel sympositions, which are sympositions as originally conceived at a given moment of time whose symponents appear together in parallel.

Now, what is the partonomy of a serial symposition and how might it be different from the partonomy of a parallel symposition, and what happens when these partonomies are put together? First, let’s consider a symposition that for at least some stretch of time has parallel symponents that do not move—our atom of hydrogen, perhaps, at a temperature of absolute zero—and we can say that such a symposition is “frozen.” The self-same parallel configuration of the hydrogen atom is present at each snapshot of time, such that there is a series of {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, etc., that is ordered by time non-arbitrarily. Thus it is also a serial symposition. All frozen parallel sympositions are also serial. How can all of these identical only-parallel partonomies be fit into a single serial+parallel partonomy? There is a range of possibilities with two extremes; we can serially sem-link at the Dust or at the top—either as {{e, e, e, …}, {{qu, qu, qu, …}, {qu, qu, qu, …}, {qd, qd, qd, …}}} or as {{e, {qu, qu, qd}}, {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, …} or in between as {{e, e, e, …}, {{qu, qu, qd}, {qu, qu, qd}, {qu, qu, qd}, …}}. They have, respectively, four, one, and two ellipses (…), two extremes and one intermediate, corresponding to the number of nodes at each level.2
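To make the three options concrete, here is a small sketch in Python. The representation and function names are my illustrative choices, not part of the theory: partonomies are nested tuples, and a serial series (an ellipsis in the text) is rendered as a tuple of identical elements.

```python
# A minimal sketch (all names hypothetical) of the three ways to serially
# sem-link n frozen snapshots of the hydrogen atom {e, {qu, qu, qd}}.

def sem_link_at_dust(n):
    # One serial series per primitive:
    # {{e, e, ...}, {{qu, qu, ...}, {qu, qu, ...}, {qd, qd, ...}}}
    return (("e",) * n, (("qu",) * n, ("qu",) * n, ("qd",) * n))

def sem_link_at_top(n):
    # One serial series of whole snapshots:
    # {{e, {qu, qu, qd}}, {e, {qu, qu, qd}}, ...}
    snapshot = ("e", ("qu", "qu", "qd"))
    return tuple(snapshot for _ in range(n))

def sem_link_in_between(n):
    # A series of electrons plus a series of whole nuclei:
    # {{e, e, ...}, {{qu, qu, qd}, {qu, qu, qd}, ...}}
    return (("e",) * n, tuple(("qu", "qu", "qd") for _ in range(n)))

def count_series(p):
    # Count the serial series: tuples whose elements are all identical.
    if not isinstance(p, tuple):
        return 0
    own = 1 if len(p) > 1 and all(x == p[0] for x in p) else 0
    return own + sum(count_series(x) for x in p)

print(count_series(sem_link_at_dust(3)))      # 4 series: e, qu, qu, qd
print(count_series(sem_link_at_top(3)))       # 1 series: whole snapshots
print(count_series(sem_link_in_between(3)))   # 2 series: e and nuclei
```

Counting the serial series in each structure recovers the four, one, and two ellipses of the three options.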

Recall that a parallel partonomic gap, if sufficiently large, results in the level below the gap relating quasi-continuously to the level above the gap. The ellipses in the previous paragraph represent serial partonomic gaps, but how large are they? The fact that primitives are lines in spacetime means that they are continuous and that there are infinitely many points between any two moments of time.3 The serial partonomic gap is so large it’s infinite. Whereas parallel partonomic gaps only approach continuity, serial partonomic gaps can actually get there, and this holds regardless of whether the snapshots are serially sem-linked at the parallel partonomic bottom, at the top, or anywhere in between. So what is the spacetime partonomy of a symposition? A final answer requires choosing a level where the serial sem-linking occurs. Such a choice seems arbitrary, however, at least for frozen sympositions, so I won’t make it, instead preserving the tension.4 Regardless, we can see that serial sem-linking in frozen sympositions is always continuous.

Two questions related to each other arise. Can there be continuously serial sympositions that are not frozen? And what would it take to have a discretely instead of continuously serial symposition, which of course could not be frozen? For a symposition not to be frozen, poses must be changing somewhere in it over time. Recall that a symposition is more than its partonomy. Though frozen sympositions clearly also have frozen partonomies, sympositions with frozen partonomies do not necessarily need to be frozen themselves. Take the Solar System. The partonomy of the Solar System, at least considering only the planets and the major moons, has not changed in millions of years. Still, the Earth and the Sun have completed millions of loops of a continuous sequence of poses since then, as have all the other planets and moons. Importantly, if we tried to discretize the sequence of poses, we would have to make arbitrary choices.

When would we be able to make a non-arbitrary choice? Consider a planet that is happily orbiting the Sun until an unfortunate confrontation with another object throws it off course indefinitely into a highly eccentric orbit. This is hypothesized to have actually happened in the history of our Solar System to “Planet Nine,” which started out between Saturn and Uranus but was at some juncture deflected by Saturn towards Jupiter and then by Jupiter into the far reaches of the Solar System to live out its life with an orbital period of 10,000 to 20,000 Earth-years.5 A non-arbitrary division can be made in the temporal series before and after the planet’s deflection. The exact moment of the division may not be easy to determine, but the more important and fortunately easier question is whether there should be a division, and the answer is yes. Regardless, note that the Jumping-Jupiter scenario, as it’s referred to in astrophysics, did not change the parallel partonomy of the Solar System at any moment;6 its partonomic relevance is purely serial.

Imagine instead a spaceship that orbits Earth many times and then changes course and orbits Mars many times. The change in course affects the serial partonomy just like above with a clear before and after, but if we construct the parallel partonomy, we see that it is also affected, unlike above with Planet Nine. It goes from {{Earth, spaceship}, Mars} to {Earth, {spaceship, Mars}}.7 Let’s continue with just the initial letter of each and try to construct a full spacetime partonomy. The parallel partonomies at first are the repeated {{E, s}, M}, {{E, s}, M}, {{E, s}, M}, etc., but in the end are {E, {s, M}}, {E, {s, M}}, {E, {s, M}}. If we serially sem-link at the gravitational Dust, then we get {{{E, E, E, …}, {s, s, s, …}}, {M, M, M, …}} followed by {{E, E, E, …}, {{s, s, s, …}, {M, M, M, …}}}. Recall that choosing the Dusty bottom instead of some other level for continuously serial sem-linking is arbitrary in frozen sympositions and in sympositions with frozen partonomies, such as the Solar System in the Jumping-Jupiter scenario. When partonomies are not frozen, however, the choice is no longer arbitrary. If we try to sem-link our spaceship scenario at the bottom, we would need to be able to put {E, E, E, E, E, E, …}, {s, s, s, s, s, s, …}, and {M, M, M, M, M, M, …} into a tree. This cannot be done in such a way that reconciles both the initial and final parallel partonomies. We can sem-link at the top, getting {{{{E, E, E, …}, {s, s, s, …}}, {M, M, M, …}}, {{E, E, E, …}, {{s, s, s, …}, {M, M, M, …}}}}, but we get a very odd result if we do so: the intrinsic serial partonomy of Earth is sliced in two by an extrinsic spaceship flying to Mars!

We can solve the problem by an analogy with taxonomies. There were two ways in which a taxonomy could fail to be a tree: by the presence of either continuous or cartesian variation. There are also two ways in which a partonomy can fail to be a tree. The first has already been covered above: continuous sem-linking does not provide a tree, nor even a graph, because both of these are discrete mathematical objects. The second way occurs when symponents belong simultaneously8 to more than one symposition, a situation that can still be depicted by a graph if no longer by a tree. Non-hierarchical partonomies are ubiquitous. They are necessary any time a symponent moves from belonging to one symposition to belonging to another. Though non-hierarchical partonomies are first defined here for temporal reasons, we will see later that they are also relevant spatially.

Figure 1. A non-hierarchical serial partonomy.

Though we need non-hierarchical partonomies to fully account for it, spatiotemporal symposition is as full of hierarchical possibility as just spatial symposition. Consider the spatiotemporal partonomy of an individual {football game} or individual {conference}; they both have many levels above the lowest discrete serial symposition. I’ve mentioned several times that the Logos has developed, grown, or changed after the Big Bang. Given serial sem-linking, however, those expressions are faulty. The Logos spans all of spacetime, planted firmly on both the primordial homogeneity at the Big Bang and the ever-present Dust. The Logos hasn’t developed after the Big Bang, but rather the horizon of its revelation has been expanding.


1. Ignoring relativistic concerns regarding frame of reference. Again, this book is focused on the classical realm.

2. In {e, {qu, qu, qd}}, there is one level between the quarks and the atom, but no levels between the electron and the atom. The in-between serial partonomy could alternatively be depicted as {{{e}, {e}, {e}, …}, {{qu, qu, qd}, {qu, qu, qd}, {qu, qu, qd}, …}}, for clarity. Also, there’s no need to distinguish the serial symponents, with subscripts for instance, because of the direct mapping between the seriality of writing and the seriality of time.

3. You may object because of the Planck time. Note that the Planck time does not set a discretization of time, but rather a bound for the measurability of time. Time could go unmeasurably crazy at such tiny scales even while remaining continuous, if it behaved something like the Weierstrass function or the Koch snowflake, perhaps.

4. The tension can be explored further in the endurantism vs. perdurantism literature, although that literature doesn’t explore the in-between possibility.

5. http://www.nature.com/news/evidence-grows-for-giant-planet-on-fringes-of-solar-system-1.19182

6. In all likelihood there was a reorganization of moons, but we are entirely ignorant of the details, so I’ll ignore this possibility.

7. Continuing with the theme of a very pruned Solar System.

8. “Simultaneously” in a timeless, 4D sense, perhaps oxymoronically.

9. Potential Landscapes, Exergy, and Thermodynamics

Recall that gravity is monovalently attractive. If you had two hydrogen atoms far apart in space, then the parameter space of the symposition of the two would have a dimension corresponding to the distance between them. The distance between them also has a corresponding gravitational potential energy associated with it. Farther distances correspond to more potential energy, and closer distances to less. Energy is conserved, however, so as the hydrogen atoms gravitate towards each other, the gravitational potential energy lost due to their getting closer is converted into kinetic energy, making them move with ever greater velocity as they approach each other. Just like a histogram or population density landscape can be constructed over a parameter space, a potential energy landscape can be constructed over it as well. The parameter space of the symposition of the two hydrogen atoms has a potential landscape that reaches a minimum or valley when the two atoms are in the same place.
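As a sketch, the gravitational potential landscape over this one-dimensional parameter space can be written out directly. The constants are the standard ones; the sample distances are chosen only for illustration.

```python
# Gravitational potential landscape for two hydrogen atoms, as a function of
# the one parameter of their symposition: the distance between them.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.674e-27   # mass of a hydrogen atom, kg

def U_grav(r):
    # Potential energy, conventionally zero at infinite separation; it only
    # decreases as the atoms approach, so the valley is at r -> 0.
    return -G * m_H * m_H / r

# Farther distances correspond to more potential energy, closer to less:
for r in (1.0, 0.5, 0.01):
    print(f"r = {r:5} m   U = {U_grav(r):.3e} J")
```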

But that analysis is strictly gravitational. Atoms have primitives that also interact according to the other forces. The potential landscape from electromagnetism is more complicated, but suffice it to say that it does not go to a minimum when the two hydrogen atoms are in the same place, but rather when they are about 7.4×10⁻¹¹ meters apart. The overall potential landscape from the two forces is simply the sum of the two landscapes, at least until even smaller scales where the nuclear forces also become relevant. Since gravity in this context is overwhelmingly weaker than electromagnetism, the distance between the two atoms is almost completely dictated by electromagnetism.
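A toy version of the summed landscape can be sketched numerically. The true electromagnetic curve is quantum-mechanical; here I stand in for it with a Lennard-Jones form whose valley is placed by hand at the bond length, an assumption for illustration only.

```python
# Toy combined landscape: a Lennard-Jones stand-in for electromagnetism
# (valley placed by hand at the H-H bond length) plus the gravitational term.
G, m_H = 6.674e-11, 1.674e-27
r_valley, eps = 7.4e-11, 7.6e-19   # assumed valley position (m) and depth (J)

def U_total(r):
    lj = eps * ((r_valley / r) ** 12 - 2 * (r_valley / r) ** 6)  # min -eps at r_valley
    grav = -G * m_H * m_H / r
    return lj + grav

# Scan the summed landscape for its valley; gravity here is dozens of orders
# of magnitude weaker, so the minimum sits where electromagnetism puts it.
rs = [r_valley * (0.5 + 0.001 * i) for i in range(2000)]
best = min(rs, key=U_total)
print(best)   # ~7.4e-11 m
```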

In general, when primitives are attracted to each other by some force, the potential energy of their symposition is highest when they are separate, and vice versa when they repel each other, as with a pair of electrons. Since the Dust was distributed almost homogeneously at the Big Bang, the Universe was born with an enormous reserve of potential energy in its gravitational form. We can repeat the story from Chapter 2 in terms of energy and potential landscapes. As primitives overcame the Jeans limit and collapsed together, the gravitational potential energy was converted into other forms of energy. First, it was converted into the kinetic energy of the primitives racing together. When they reached a sufficient density, the atoms began colliding violently with each other. The collisions often stripped the electrons from their nuclei, converting the kinetic energy of the atoms into the electromagnetic potential energy of the separation of electrons from their nuclei, along with kinetic energy for the dispersal of electrons and nuclei in their own directions.

With the nuclei stripped of their electrons, electron degeneracy pressure no longer walled the nuclei off from each other. The repulsion between nuclei continued, however, with the electromagnetic component of the potential landscape of two nuclei reaching a peak with the nuclei together and fused. The residual strong force component was the opposite, reaching a deep trough with the nuclei fused. The trough was much narrower than the electromagnetic peak, however, because the residual strong force only operates at much closer distances. Thus the combined potential landscape from the electromagnetic and strong forces had two valleys, one with the nuclei far apart and one with the nuclei fused, separated by a high potential mountain. You can imagine trying to roll a bowling ball over a hill. With a fast enough roll, the ball would overcome the peak and roll into the valley on the other side; with a roll too slow, it would just go up and then come back down on the original side.1 Nuclei racing together with enough velocity and kinetic energy overcome the barrier of electromagnetic repulsion and enter the valley of residual strong attraction. For nuclei summing to the size of iron, the residual strong valley of fusion is deeper than the unfused valley. When these nuclei fuse, the drop in potential energy from the difference in depth of the valleys is converted to other forms, such as kinetic energy and electromagnetic radiation, or light, which eventually shines into space, illuminating planets like our own.
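The bowling-ball test can be sketched numerically with a toy two-component landscape. The Coulomb term is standard; the short-range well’s depth and range are illustrative stand-ins for the residual strong force, not measured values.

```python
import math

K_E = 8.988e9    # Coulomb constant, N m^2 C^-2
Q_E = 1.602e-19  # elementary charge, C

def U_em(r):
    # Electromagnetic repulsion between two bare +1 nuclei.
    return K_E * Q_E * Q_E / r

def U_strong(r):
    # Toy residual-strong well: deep but narrow (depth and range illustrative).
    V0, a = 5e-12, 2e-15
    return -V0 * math.exp(-r / a)

def fuses(kinetic_energy, r_start=1e-12):
    # The ball-over-the-hill test: does the incoming kinetic energy clear the
    # highest point of the combined landscape between r_start and fusion?
    rs = [r_start * 0.99 ** i for i in range(700)]   # down to ~1e-15 m
    barrier = max(U_em(r) + U_strong(r) for r in rs)
    return kinetic_energy > barrier - (U_em(r_start) + U_strong(r_start))

print(fuses(1e-15))  # False: too slow a roll, back down the original side
print(fuses(1e-13))  # True: over the peak into the residual strong valley
```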

Ordinary nuclear fusion in stars does not produce elements substantially larger than iron because the depth of the residual strong valley of fusion is overcome by the height of the electromagnetic peak in these elements. The exceptional circumstances for the fusion of elements beyond iron occur during novae and supernovae, stellar explosions, processes which are too complicated to describe in detail here. When elements far beyond iron are created, they are quite often radioactive, with the peaks separating them from their radioactive products being relatively easy to overcome. When they fission or decay and drop back down to elements closer to iron, they release potential energy.

The release of energy from nuclear fission and radioactive decay is the primary source of heat within Earth’s core. The gradual cooling of an object can be used to determine how old it is if one has a good estimate for its initial temperature. Lord Kelvin calculated the age of the Earth a century and a half ago using the correct assumption that it originated as molten, but he did not know about radioactivity, so he underestimated Earth’s 4.5 billion years of age as being near 100 million years. Without radioactivity, Earth’s core would have cooled a long time ago. Once Earth’s core cools, there will no longer be dissipation of energy from it into space. Currently, that dissipation is the primary factor driving the motion of tectonic plates through the convective processes churning in Earth’s mantle. The motion of the plates powers enormous stresses in the solid materials of Earth’s crust. The stresses are carried by distortions in interatomic bonds, such that they are forced out of the comfort of the lowest points of their potential valleys and up the sides of potential mountains. With enough stress, the potential valleys are finally escaped, and the partonomic neighborhoods of the involved atoms can shift quickly in an earthquake. If the earthquake happens under the sea, it can displace massive amounts of water, throwing the sea out of its own potential valley, resulting in a tsunami.
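Kelvin’s estimate can be reproduced with the standard half-space cooling formula. The figures below are the usual textbook reconstruction of his reasoning, not taken from this chapter.

```python
import math

# Half-space cooling from an initially molten Earth: the surface temperature
# gradient dT/dz implies an age t = T0^2 / (pi * kappa * (dT/dz)^2).
T0 = 4000.0     # assumed initial (molten) temperature, K
kappa = 1.2e-6  # thermal diffusivity of rock, m^2/s
grad = 0.0365   # observed near-surface geothermal gradient, K/m

t_seconds = T0 ** 2 / (math.pi * kappa * grad ** 2)
t_years = t_seconds / 3.156e7
print(f"{t_years:.2e} years")   # ~1e8 years: Kelvin's famous underestimate
```

Plugging in radioactive heating would flatten the inferred cooling history and push the estimate toward the true 4.5 billion years.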

Let’s summarize the pathway of potential energy from Big Bang to tsunami. Primordial gravitational potential energy is used to overcome electromagnetic potential barriers to nuclear fusion. Particularly violent events like novae overshoot the lowest potential valley of iron nuclei, storing potential energy in larger radioactive nuclei. The stored nuclear potential energy is released in planet cores through radioactive decay, powering the convection of molten rock and the motion of tectonic plates. The energy from the motion of tectonic plates is stored in stressed interatomic bonds in rock. The electromagnetic potential energy in the stressed bonds is released in earthquakes, where it can differentially raise and lower sea level. The disrupted sea level packs the energy into gravitational potential energy, full circle from the Big Bang, where it dissipates in a series of waves that transfer the energy elsewhere. The overall story is rather simple: primitives want to flow energetically downhill in the landscapes over the parameter spaces of the sympositions they participate in, but they are blocked by many potential barriers. When they do succeed in flowing downhill, they release energy, which in turn interferes with other primitives flowing downhill.

The energy differential between the actual state of a system with its primitives stuck behind potential barriers and the state of the system where all of the primitives have tunneled through their barriers and into their lowest valleys is called exergy. About 5.9 million years ago, the Strait of Gibraltar closed, and the Mediterranean Sea dried up, leaving a string of very salty lakes along its floor. This created a large differential in the sea levels of the terrestrial ocean and of the shriveled Mediterranean. You can imagine being an entrepreneur and building a mill on the closed strait, together with some canals to carry oceanic water over a mill-wheel that dumps into the Mediterranean basin. Unfortunately, after about 0.6 million years of hounding investors, your venture didn’t get funded because the strait reopened, triggering the Zanclean Deluge of the Mediterranean basin.2 Where there is a differential in the height of two neighboring bodies of water, a mill can be built to extract the exergy of the falling water. If there is no differential, a useful mill cannot be built. But even without the differential, there is still gravitational potential energy because the water on Earth hasn’t collapsed to a black hole.
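For a sense of scale, here is a back-of-the-envelope sketch of the mill’s exergy per cubic meter of falling water. The height differential is an assumed round number, not a measured figure.

```python
# Exergy available to the hypothetical Gibraltar mill, per cubic meter of
# ocean water dropping into the dried Mediterranean basin.
g = 9.81        # gravitational acceleration, m/s^2
rho = 1025.0    # density of seawater, kg/m^3
drop = 1500.0   # assumed height differential, m

exergy_per_m3 = rho * g * drop   # J per cubic meter of falling water
print(f"{exergy_per_m3:.3e} J/m^3")   # ~1.5e7 J per cubic meter

# With drop = 0 there is no exergy: the remaining gravitational potential
# energy (the water not having collapsed to a black hole) is not extractable.
```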

It is interesting to ask how much of the energy right after the Big Bang was exergy. Apparently quite a bit of it was, and it would be fun if exactly all of it was, but this is an open question in physics. Regardless, we can look into the World and try to find the distribution of exergy. We have already discussed at length the exergy in simple gravitational, chemical, and nuclear systems. A system also has exergy whenever its partonomic neighborhood includes two or more sympositions at different temperatures; the hotter ones have exergy that can be extracted by letting the heat energy flow into the cooler ones. When they all reach the same temperature, no more exergy can be extracted. Aside from these examples, exergy resides in a complicated tangle of volume, pressure, temperature, and other factors, which are studied by the science of thermodynamics. We can see that the Sun harbors an enormous amount of exergy, as does Earth’s core. A simple way to find exergy is to find those sympositions that radiate heat; life forms do, as do electronic devices, automatic machines, and our homes in winter.

When exergy decreases, where does it go? Where did it go when the Atlantic refilled the Mediterranean? The event was probably very loud, so much of the exergy ended up as sound, which is just the coordinated oscillations of interatomic bond lengths of atoms and molecules of air as they are periodically displaced to and fro at the bottom of their potential valleys. These oscillations are also called “phonons.” Incidentally, heat is exactly the same thing, except it includes all potential valleys, not just the ones of bond length, such as those from bond angles and rotations. Exergy is lost as it diffuses into the maze of partonomic neighborhoods in the Logos, where no symposition can coordinate a regathering of it all. On the surface of the Earth, the potential foothills that it gets lost among are almost always dependent upon electromagnetism, and the last destination for it is as electromagnetic radiation into space.

Finally, it’s worth addressing the relationship between symmetry and energy. Many of you may understand symmetry as being a rough measure of order. When energy is added to a system or partonomic neighborhood, its sympositions jostle about more, and thus one could expect its order to decrease with the addition of the energy. At a first pass, then, it seems that we could expect symmetry to decrease when energy increases. Enter barium titanate, BaTiO3. Barium titanate is a crystalline compound with interesting electromagnetic properties like photorefractivity and piezoelectricity that melts (or freezes) at 1625 ºC. If you cool it down through the melting point and onward to well below room temperature, it passes through a sequence of solid phases3 that have, in order, hexagonal, cubic, tetragonal, orthorhombic, and rhombohedral crystal structure, where each is a variant on the same unit cell symposition of barium, titanium, and oxygen ions. This sequence, arranged by decreasing temperature, in fact proceeds from more symmetry to less.

Figure 1. The unit cell of BaTiO3, at a variety of temperatures, with the dielectric constant plotted. Red spheres are oxygen ions, green barium ions, and blue titanium ions. http://www.intechopen.com/source/html/48777/media/image2.png

A parameter space for the unit cell sympositions for each phase can be constructed whose dimensions measure bond lengths and bond angles between adjacent ions. A small chunk of barium titanate has far more than trillions of ions, so the histogram over the parameter space has similarly many datapoints. Each phase has a different histogram. The hottest phase has the broadest modes, since the individuals are jostling about so much and can’t decide quite where to be, and conversely the coolest phase has the narrowest modes. As the small chunk is cooled, a given mode gets narrower and narrower until a thermodynamic breaking point where it fractures into several pieces. At a high temperature, the titanium ion bounces all around the center of the unit cell, but on average, it is exactly in the middle; at low temperature, the titanium ion picks a direction and shifts off-center, breaking the symmetry with the other ions in the unit cell. Thus the symmetry is broken at the level of interatomic poses, but it is also broken on a higher level: the off-center shift propagates from one unit cell to the next, until it reaches another wave of propagation that decided to shift in another one of the possible directions. If we assume that the small chunk started out at the higher temperature as monocrystalline, that is as a single perfect repeating lattice all throughout, then it can easily end up partitioned into multiple crystal grains, probably separated by twin boundaries.4 This partitioning entails the insertion of a level into the partonomy.
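The fracturing of a mode can be sketched with the standard Landau toy model of a symmetry-breaking transition. The model and its constants are generic illustrations, not the measured physics of barium titanate; only the shape of the landscape matters here.

```python
# Landau toy model of mode splitting: the titanium ion's off-center
# displacement x sits in an effective potential
#     U(x) = a*(T - Tc)*x**2 + b*x**4,
# which has one central valley above Tc and two off-center valleys below it.
def minima(T, Tc=400.0, a=1.0, b=1.0):
    # dU/dx = 2*a*(T - Tc)*x + 4*b*x**3 = 0
    # -> x = 0, or x**2 = a*(Tc - T) / (2*b) when T < Tc
    if T >= Tc:
        return [0.0]
    x = (a * (Tc - T) / (2 * b)) ** 0.5
    return [-x, x]

print(minima(500.0))   # hot phase: one central valley, full symmetry
print(minima(300.0))   # cold phase: two off-center valleys, symmetry broken
```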

We can see that the association between energy and symmetry is the opposite from what one might expect: partonomic neighborhoods with more energy have more symmetry, not less. At the Big Bang, all partonomic neighborhoods were in enormously high energy states, and these correspond to the known homogeneity of the quark-gluon plasma. What happened before the quark-gluon plasma kindles current research into theories of supersymmetry. As the Universe has embarked on lower energy states after the Big Bang, a cascade of broken symmetries in many partonomic neighborhoods has resulted in a fracturing of populations, leading directly to the infilling of partonomic gaps with more levels and the growth of the Logos upon the Dust.


1. A more realistic illustration accounts for the fact that it’s more like two balls participating together in the construction of a hill to divide them.

2. Dramatized by xkcd: http://xkcd.com/1190/

3. In general, a solid of a given compound can have many phases at different temperatures (and pressures), but the compound will have only one liquid phase and only one gaseous phase.

4. http://www.tandfonline.com/doi/abs/10.1080/14786444908561371

8. Taxonomies, Variation, and Broken Symmetries

The fundamental building block of a partonomy is the relation “part of,” and it links vertically from a lower level to a higher level in a partonomic neighborhood. Sympositions and their symponents do not often belong to the same population because they are rarely particularly similar;1 thus, a population generally resides on one given level of a partonomy. Consequently, distillations of populations do have specific vertical localizations in the Logos even though they do not have specific horizontal, i.e. quasi-spatial, localizations like individuals and populations. Similarity, however, is obviously a matter of degree, so a population can be subdivided into subpopulations expressing greater similarity or collected into superpopulations expressing less. Subpopulations and superpopulations can both be distilled, and they can be linked by the horizontal relation “type of” into a hierarchical structure similar to a partonomy. Such a structure is a taxonomy. For instance, with the population of mammals, one can take the subpopulation of primates and the superpopulation of vertebrates. Primates are a type of mammal, and mammals are a type of vertebrate, and these relations extend within the very large and hierarchical Linnaean taxonomy of the “tree of life,” which is both a colorful metaphor and a precise mathematical expression.2

This analysis suggests that taxonomies are always trees. There are, however, two ways in which taxonomies can fail to be trees. Recall the population of raindrops from Figure 1 in the last chapter. Take a raindrop with -30 microstatcoulombs of charge and build a hierarchy of subpopulations that allow for increasingly more variation from it. We can specify the subpopulation of raindrops with -60 to 0 mstatC of charge, and then, incorporating that one in a bigger one, we can specify the subpopulation of raindrops with -90 to 30 mstatC. We can, however, select a different raindrop with, say, -40 mstatC of charge and build a different hierarchy of subpopulations from it: -70 to -10 mstatC and then -100 to 20 mstatC. These latter sets neither contain nor are contained by their former counterparts. So we have two different and irreconcilable possibilities for the hierarchy of subpopulations of raindrops, neither of which seems ‘better’ in any naïve sense.
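The irreconcilability is easy to verify mechanically. Representing each subpopulation as a charge interval (in mstatC):

```python
# The two competing hierarchies of raindrop subpopulations, as intervals.
def contains(outer, inner):
    # One interval contains another if it covers it on both ends.
    return outer[0] <= inner[0] and inner[1] <= outer[1]

# Hierarchy built around the -30 mstatC raindrop:
a1, a2 = (-60, 0), (-90, 30)
# Hierarchy built around the -40 mstatC raindrop:
b1, b2 = (-70, -10), (-100, 20)

# Within each hierarchy, the intervals nest:
print(contains(a2, a1), contains(b2, b1))   # True True

# Across hierarchies, neither interval contains its counterpart:
print(contains(a1, b1), contains(b1, a1))   # False False
print(contains(a2, b2), contains(b2, a2))   # False False
```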

The problem, of course, is that I’m forcing hierarchical structure onto the distribution of raindrops when it simply isn’t there. This is true in general for populations with one mode, and we can say that such populations exhibit continuous variation because there is a continuum of possibilities for the relevant parameter with no non-arbitrary location to divide it. Populations that can be divided and for which the divisions can be organized hierarchically exhibit linnaean variation, as the Linnaean taxonomy is the prime example. If you imagine the modes of a population with linnaean variation as being primitives that can be sem-linked, then the partonomy of its histogram is its taxonomy! Once again there is a strong analogy between physical space and parameter spaces, and there is a likeness between the vertical (“part of”) and horizontal (“type of”) orientations in the Logos.

We have only considered a one-dimensional parameter space for the raindrops, that of the charge; we could consider two or more dimensions simultaneously, such as both charge and mass. In that more general case, the raindrops could be said to exhibit multivariate continuous variation, contrasting with the univariate continuous variation over a one-dimensional parameter space. It’s possible that the values of the data points in two or more dimensions of a parameter space are correlated with each other. Raindrops with a greater charge may generally have more mass, but perhaps not in a perfectly predictable way. Thus charge and mass could be correlated in raindrops, but without being interchangeable measures. This correlation would depopulate certain areas of the histogram, but without changing the number of modes. This can be called interdependent multivariate continuous variation, in contrast with independent multivariate continuous variation when there is no correlation. By necessity, all independent and interdependent variation is multivariate, because you need at least two dimensions to have a correlation, so the “multivariate” can be dropped, e.g. as just independent or interdependent continuous variation.
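The distinction between interdependent and independent continuous variation can be sketched with a correlation coefficient over a small made-up sample. All the numbers below are hypothetical.

```python
import statistics

# Hypothetical raindrop sample over a two-dimensional parameter space.
charge = [-50, -40, -30, -20, -10]        # mstatC
mass_corr = [2.1, 2.6, 3.2, 3.5, 4.1]     # mg, tends upward with charge
mass_ind = [3.4, 2.2, 4.0, 2.5, 3.0]      # mg, no trend with charge

def pearson(xs, ys):
    # Pearson correlation coefficient: covariance over the product of spreads.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(pearson(charge, mass_corr), 2))  # close to +1: interdependent
print(round(pearson(charge, mass_ind), 2))   # close to 0: independent
```

In neither case does the correlation add or remove modes; it only depopulates some regions of the histogram, as in Figure 1.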


Figure 1. Independent and interdependent continuous variation. On the right, the NW and SE parts of the distribution are slightly underpopulated, and the NE and SW parts of the distribution are slightly overpopulated, making the two parameters correlated. http://www.mdpi.com/applsci/applsci-03-00107/article_deploy/html/images/applsci-03-00107-g001-1024.png

Linnaean variation is a special case of discrete variation, which pertains to those populations that have more than one mode. Every population exhibits exactly one of continuous or discrete variation, given some parameter space like the natural one, since the number of modes is either one or more than one.3 In discrete variation that is not linnaean, the modes cannot be sem-linked together to create a single hierarchical tree. For instance, imagine a population of pea plants that can either be short or tall, depending on some gene with two forms or “alleles,” and can either have constricted or full pods, depending on some other gene with two alleles. Then the individuals have four ways for combining the two characteristics: they can be short with constricted pods, short with full pods, tall with constricted pods, or tall with full pods. The individuals can be plotted in a two-dimensional parameter space of plant height and pod volume. Within this parameter space there will be four modes, each having some small amount of spread. This would be an example of multivariate discrete variation.

The question then arises as to which two pairs of modes should be sem-linked first. Should we have {{{tall and full}d, {tall and constricted}d}d, {{short and full}d, {short and constricted}d}d}d or {{{tall and full}d, {short and full}d}d, {{tall and constricted}d, {short and constricted}d}d}d? Neither of these is more forthcoming than the other unless one of height or pod shape is artificially considered primary and the other secondary. Consequently, the taxonomy of the population is not a hierarchical tree, but it is instead a combinatorial crossing between all the possibilities of each allelic modularity. This is basically the same concept as the cartesian product in set theory, so I call such variation cartesian variation. Cartesian variation can be either independent or interdependent, for instance if a pea plant being tall makes it more likely or less likely to have full pods, rather than being neutral.
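The combinatorial crossing behind cartesian variation is literally the cartesian product familiar from programming and set theory; a sketch, with the allele names taken from the pea-plant example above:

```python
from itertools import product

# The non-hierarchical taxonomy of cartesian variation is a cartesian
# product over the allelic options, with no privileged order of splitting.
heights = ["short", "tall"]
pods = ["constricted", "full"]

phenotypes = list(product(heights, pods))
print(len(phenotypes))  # 4: one mode per combination
```

Note that `product(heights, pods)` and `product(pods, heights)` yield the same four combinations merely listed in different orders, mirroring the fact that neither serialization of the taxonomy is more forthcoming than the other.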

In sum, there is continuous variation for distributions with one mode and discrete variation for distributions with more than one mode. Discrete variation is perfected in either linnaean or cartesian variation, which are mutually exclusive. Moreover, all of these varieties of variation can be fit into a single framework. Let’s take another detour through math to demonstrate. Pascal’s triangle is a very straightforward mathematical object that is constructed as follows: take a blank sheet of paper and write a “1” in the top center. This will be the first line. Now put zeros on both sides of the one: “0 1 0.” Now for every adjacent pair of numbers on the top line, write their sum in between them on the second line. The first pair, “0 1,” adds to 1, and the second pair, “1 0,” also adds to 1: so the second line is “1 1.” Once again put zeros on both sides of the second line, “0 1 1 0,” and construct the third line in the exact same way: “1 2 1.” The fourth line is “1 3 3 1,” the fifth “1 4 6 4 1,” the sixth “1 5 10 10 5 1,” etc.
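The pad-with-zeros-and-sum construction just described is a few lines of code:

```python
# Pascal's triangle via the rule described above: pad a line with zeros
# on both sides, then sum every adjacent pair to produce the next line.
def next_line(line):
    padded = [0] + line + [0]
    return [padded[i] + padded[i + 1] for i in range(len(padded) - 1)]

triangle = [[1]]           # the first line
for _ in range(5):
    triangle.append(next_line(triangle[-1]))

print(triangle[-1])  # [1, 5, 10, 10, 5, 1], the sixth line
```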


Figure 2. Pascal’s triangle. You can imagine zeros along the outside. https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Pascal_triangle.svg/2000px-Pascal_triangle.svg.png

If you add all the numbers in a line, you get 1 for the first line, 2 for the second, 4, 8, 16, and the subsequent powers of 2. This doubling happens because each number in each line manifests twice in the sums below it. We can imagine averaging each pair of numbers instead of adding them. Doing so would make each line sum to exactly 1 and would give us Pascal’s normalized triangle. The nth line of Pascal’s normalized triangle is, among other things, the distribution of probabilities for n coin tosses that come up the same as the first toss, also known as the binomial distribution. Each normalized line can thus be considered as a probability density function, which is an expression of what the histogram over a parameter space is likely to be. As n gets larger and larger the resulting series of numbers in line n of Pascal’s triangle gets closer and closer to approximating a curve called a gaussian, popularly known as the “bell curve.” Since every line of Pascal’s triangle approximates a gaussian and every line is a split and shifted and added copy of the one above it, it follows that gaussians retain their shape under the operation of “splitting-shifting-adding”—and for the normalized triangle more appropriate for probability distributions—“splitting-shifting-averaging.”
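A quick check of the averaging variant: every line of the normalized triangle sums to exactly 1, and iterating ten times reproduces the binomial probabilities C(10, k)/2¹⁰.

```python
from math import comb

# Pascal's normalized triangle: pad with zeros and average each adjacent
# pair instead of adding, so every line remains a probability distribution.
def next_norm(line):
    padded = [0.0] + line + [0.0]
    return [(padded[i] + padded[i + 1]) / 2 for i in range(len(padded) - 1)]

line = [1.0]
for _ in range(10):
    line = next_norm(line)

print(abs(sum(line) - 1.0) < 1e-12)  # True: the line still sums to 1
print(all(abs(line[k] - comb(10, k) / 2**10) < 1e-12 for k in range(11)))  # True
```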

Imagine instead of starting with “1” at the top of the page, we started with “1 0 0 1.” I don’t believe there is any sense in which “1 0 0 1” approximates a gaussian. Our subsequent lines would be “1 1 0 1 1,” “1 2 1 1 2 1”; but after that, it stops being so trivial: “1 3 3 2 3 3 1,” “1 4 6 5 5 6 4 1,” “1 5 10 11 10 11 10 5 1,” “1 6 15 21 21 21 21 15 6 1.” The last computed line no longer has a dip in the middle. After several iterations, it is now approximating a gaussian! Or at least it now has one mode. In fact, it is irrelevant what string of numbers you put at the top of your page, as long as it’s finite; any string will eventually become monomodal and after that start approximating a gaussian at some line, although your paperage may vary.4
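The convergence to a single mode can be checked mechanically; here is a sketch that iterates the rule on the seed “1 0 0 1” and counts modes as peaks after merging runs of equal values:

```python
# Iterate the pad-and-sum rule on an arbitrary seed until it is monomodal.
def next_line(line):
    padded = [0] + line + [0]
    return [padded[i] + padded[i + 1] for i in range(len(padded) - 1)]

def count_modes(line):
    # Merge runs of equal values, then count strict local maxima.
    runs = [line[0]]
    for v in line[1:]:
        if v != runs[-1]:
            runs.append(v)
    return sum(
        1 for i, r in enumerate(runs)
        if (i == 0 or r > runs[i - 1]) and (i == len(runs) - 1 or r > runs[i + 1])
    )

line, steps = [1, 0, 0, 1], 0
while count_modes(line) > 1:
    line = next_line(line)
    steps += 1
print(steps, line)  # 6 [1, 6, 15, 21, 21, 21, 21, 15, 6, 1]
```

Six iterations suffice for this seed; longer or more jagged seeds take more paper, but any finite seed eventually becomes monomodal.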

We can reverse the process and split a single mode by over-shifting. Say we’re at the third line of the triangle: “1 2 1.” We split: “1 2 1,” “1 2 1.” We should get to “1 3 3 1” after shifting and adding but we shift too much: “1 2 1 0 0 0 0 1 2 1.” If we keep iterating, we’ll eventually recuperate a gaussian, but for now we’re experiencing a setback. Shifting too much doubled the number of modes. Splitting-shifting-adding, or what we can call broken symmetries, can account for both discrete and continuous variation, depending on how big the shifts are. Symmetry belongs to subpopulations that are alike, and it is broken when a factor is introduced that differentiates them. In plants, there is usually a whole array of genes that can affect height. If each of these genes comes in a taller and a shorter allele, then whether the resulting distribution is monomodal or multimodal depends on the profile of overlap between the “shifts” corresponding to the difference in height of a pair of alleles of the same gene. If all of these are small and of comparable size, then the population will have one mode, but if one of them is substantially bigger than all the rest, then it will have two modes.

These considerations all concern one-dimensional parameter spaces, however. This results from the two-dimensionality of the paper used to construct Pascal’s triangle, one of whose dimensions is taken for the splitting-shifting-adding operation. Imagine replicating the sheet of paper with its possibly mutant triangle, stacking the copies, and splitting-shifting-adding through the sheets of paper as well as across them. This can theoretically be done in even higher dimensions, accounting for parameter spaces of arbitrary dimensionality. If the introduction of broken symmetries adding new dimensions is always done to the entirety of the distribution, then cartesian variation is accounted for. If the addition of a new dimension is done to one mode at a time, then linnaean variation is accounted for. Whether mode-multiplying factors are applied always to all modes together or never to more than one mode is the factor differentiating cartesian and linnaean variation. Intermediate possibilities can be procured, and these would deliver mixtures of linnaean and cartesian variation.5 Finally, there’s a distinct similarity between independent continuous variation and independent cartesian variation. Both of these are created by many independent symmetry-breaking factors, except that in the independent cartesian case, there are a few factors that have much larger shifts than the rest, with the distribution of these factors being, in effect, multimodal.

I have developed a taxonomy for the population of populations based on the characteristics of populations’ distributions. We can ask what variation this taxonomy expresses. A distribution is either continuous or discrete. This variation is discrete. Both discrete and continuous distributions can either be univariate or multivariate; this variation is cartesian. Only multivariate distributions can be interdependent or independent; this variation is linnaean. I haven’t spent much time on this, but monomodal distributions can be gaussian or logistic or something else depending on a large family of parameters used as multipliers, exponents, or otherwise in the equations defining many types of distributions.6 The parameter space of distributions does not have a specific dimensionality since not every distribution uses every parameter, but the framework of this chapter is sufficiently flexible to account for this and many other polymorphisms of the distribution of distributions.


1. Sympositions similar to their symponents would be similar to fractals. Sympositions like mountainscapes and shorelines would be well-known examples of this.

2. “Tree” was defined in Chapter 3 as a connected graph with no loops in it.

3. The number of modes isn’t always clear, but that’s what statistical techniques are for.

4. This is also related to the fact that the limit of n-fold autoconvolution is always a gaussian.

5. There’s one specific mixture of linnaean and cartesian variation that I would like to see implemented. There are two popular ways to organize email archives. One is to put individual emails in folders, which can then be put into folders, into folders, etc. The other is to create labels and apply them to whichever individual emails are relevant to that label, often with more than one label per email. The folder strategy recapitulates linnaean variation and the label strategy cartesian variation. I would love to have a system where labels are applied to emails but where labels are placed into a folder hierarchy. By never applying more than one label per email or never using more than one folder, either system can be recovered, but the mixed system covers basically all my use cases.

6. See https://en.wikipedia.org/wiki/List_of_probability_distributions. See also my free iOS app Sympose It on the App Store for an interactive implementation of the concept of symposition applied to mathematical equations.

7. Parameter Spaces, Histograms, and Modes

Say I’m trying to characterize a population. One of the best ways to do so is to take the same numeric measurement or measurements from every individual in the population and make a scatterplot of all the data points, to see how they distribute themselves. If we only take one measurement from each individual, then our scatterplot is just a bunch of points on the standard number line. If we take two measurements from each, then our scatterplot is a bunch of points on the standard Cartesian plane. If we take three or more, then our scatterplot is a bunch of points in 3D-space, 4D-space, or higher. The 1-, 2-, 3-, 4-, etc., dimensional space that we plot our points in is the parameter space of our characterization, and each dimension is a parameter.1 The measurement can belong to the symposition as a whole, or to the symponents, or to the poses between them, or sometimes even abstruse mixes of these or of these and the measuring mechanism. Consequently, a symposition with more symponents will have more potential measurements and thus a higher-dimensional parameter space. Ultimately, though, the choice of measurements and thus the dimensionality of the parameter space is up to the measurer, but the enumeration of symponents and poses, both at the top levels and down to the substrate, provides a natural parameter space. If I mention a symposition’s parameter space without specifying exactly how it was chosen, then you can assume I mean that natural one that does not include abstruse mixes.

If we add one dimension to our parameter space, we can construct a histogram. A histogram carves up a parameter space into a number of discrete parcels or bins and depicts, with the additional dimension, how many individuals fall in each parcel or bin. A histogram effectively shows the local density of individuals in a subregion of the parameter space. You could call it a “population density landscape,” with peaks in the landscape corresponding to high density. These peaks are called modes. The number of peaks in a histogram depends on both the choice of parameter space and the choice of bins. Consider the population of raindrops that fall during a storm. For reasons related to lightning, raindrops usually carry a small amount of net static electric charge, either positive or negative. If negative, that means there is a tiny fraction of a percent more electrons in the drop than protons; and if positive, a tiny fraction of a percent fewer. Figure 1 shows a 2-dimensional histogram for raindrops built on a 1-dimensional parameter space with the x-axis being the parameter of charge and the y-axis being a scaled count of raindrops with such charge.
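The binning step is simple enough to sketch directly; here the charges are synthetic stand-ins drawn from a single gaussian, so the resulting histogram has one mode.

```python
import random

# A minimal histogram over a 1-D parameter space: carve the range of the
# data into equal-width bins and count the individuals in each bin.
random.seed(1)
charges = [random.gauss(0.0, 1.0) for _ in range(1000)]  # synthetic values

def histogram(data, n_bins):
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    return counts

counts = histogram(charges, 10)
print(sum(counts))  # 1000: every individual lands in exactly one bin
```

The central bins dominate and the edge bins hold only the tails, giving the one-mode “population density landscape” of continuous variation; coarser or finer bin choices can merge or split apparent peaks.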


Figure 1. Histogram for the charge of raindrops with a diameter of 1.0 to 1.2 mm. ESU = statcoulombs. http://onlinelibrary.wiley.com/doi/10.1002/qj.49708134705/pdf

The raindrop histogram has one clear mode in the middle. Figure 2 plots stars according to their temperature/color and their luminosity/absolute magnitude. More massive stars tend to be brighter, but even though mass might be a more natural choice of parameter than the brightness, there’s no straightforward way to measure the mass of a star from Earth. The parameter space in Figure 2 is 2-dimensional, and though the additional dimension for the histogram is missing, you can imagine it very easily. The histogram would have at least three modes corresponding to white dwarfs, main sequence stars, and giants, with a possible fourth mode for supergiants if they aren’t just a tail off the distribution of giants.


Figure 2. Hertzsprung-Russell diagram. http://vnatsci.ltu.edu/s_schneider/astro/wbstla2k/mytalk/isoho/hrdiagram.gif

Histograms and parameter spaces take advantage of our natural abilities to visualize and understand physical space. You can flip the conceptual connection and imagine the three ordinary dimensions of space as being a parameter space, with the Dust as datapoints. A histogram constructed from this space as it encompasses the whole Universe would just correspond to the distribution of matter throughout it. In the state immediately after the Big Bang, there was exactly one mode in the distribution, as matter was distributed evenly throughout the hot quark-gluon plasma. Today, matter is quite clumped into many modes corresponding to superclusters, galaxies, planets, rocks, chemicals, etc. The evolving Logos has split the original mode into many modes, each of which sem-links its own many modes contained within. This hierarchy of modes of distribution of matter is, at a rough pass equating position with pose, the partonomy of the Logos, which has been shaped at its top levels by gravity and at the bottom levels by the other forces.

But the modes and modes-within-modes of distribution of physical primitives lie strictly within the 3-dimensional “parameter space” of conventional physical space. Any individual symposition is a hierarchy of symponents posed together. If there is a natural parameter space for any symposition, and the dimensions of that space reflect the symposition itself and its symponents and poses, then there is a hierarchy of parameter spaces corresponding to the hierarchy of sympositions where some or all of the dimensions of one space are used in the construction of the dimensions of the space above it. The Dust is constrained to the parameter space of conventional space, but sympositions live in the nearly unbounded parameter space of possibility in the Logos.


1. It’s also called a configuration space and is conceptually similar to feature, phase, and state spaces.

6. Intrinsic and Extrinsic Pose, Substrate, and Partonomic Gaps

So far, the notion of symposition that I have constructed suggests that the “collection of entities” is always a collection of symponents that are lower in the partonomy and thus closer to the Dust than the symposition itself. Sympositions that behave in this way are intrinsic sympositions with intrinsic symponents, and the relationships among them are intrinsic poses. But what if one of the entities in the repetitive collection, one of the symponents, is higher in the partonomy than, or lateral to, the symposition that is specified?

To illustrate, let’s try to come up with an intrinsic distillation that specifies stars and one that specifies moons. For something to be a star, it is sufficient for its symponents to interact in such a way that they have collapsed and rounded themselves under their own gravity and have thereafter begun nuclear fusion. Now for a moon, we might want to say that it needs to be rounded, or perhaps not performing nuclear fusion, but any specification we come up with suffers from a particularly thorny fact: if all we do is set the moon in motion around a star, it stops being a moon! In other words, a moon is a moon not because of the relationships of the symponents within it, but rather because of the relationships between it and the symposition it is a symponent of and its other symponents. If an individual is a moon, then another individual is a planet and not a star. We can define “part” explicitly as an intrinsic symponent, so then the symponents of a star are just its parts, whereas the symponents of a moon are not just its parts. Not all symponents are parts. Furthermore, we can denote intrinsic sympositions as “compositions,” and my reason for coining a new word becomes clear.1 Symponents that are not parts are extrinsic symponents, and they belong to extrinsic sympositions. Finally, we can see that a symposition’s intrinsic poses are just its symponents’ extrinsic poses.

If all of the parts of a distillation are very highly modular, then it could be said to be part-independent. The identity of a part-independent distillation would depend entirely on its extrinsic symponents. {possession}d is an example of such a symposition, as is {satellite}d in its scientific meaning as any celestial body orbiting a planet or minor planet. A possession and a satellite can have any structure; all they need is to be associated in a specific way with a possessor and a planet or minor planet, respectively, which are outside of the possession or satellite themselves. The symposition that is highly modular in both intrinsic and extrinsic symponents is, uniquely and quite deliciously, {symposition}d. Part-independent sympositions must still have parts, however, because if they didn’t have parts, they wouldn’t have primitives and thus they wouldn’t even be present in the Logos. Since primitives are subject to the forces of the Standard Model, there is always at least some structure in a symposition at the lowest nuclear and chemical levels, which form the material it is made of and which we can call the substrate.

A symposition with very many symponents at some level, but especially at the top, has less structure than it could otherwise have. Contrast snowflakes with water. The partonomy of a tiny pile of snowflakes splits at the top into just several dozen or hundred snowflakes and then into the dendrites and finally into water molecules and primitives. After they melt into a droplet, however, the top level splits directly into many more than several trillion water molecules. If you could watch the evolution of the partonomy as the snowflakes melt, you’d see a wholesale disintegration of the levels above the substrate supporting the dendrites of the snowflakes. I refer to the missing levels in a symposition that has little structure as a partonomic gap. Partonomic gaps are particularly interesting in the context of distillation. Since sympositions above extreme gaps may not be able to distinguish among the individuals below the gap, their distillations may be better expressed not as discrete trees linking symponents above the gap to those below, but as quasi-continuous radiations.2

Many sympositions have a partonomic gap somewhere between the Dust at the bottom and the symponents near the top. We can define several classes of sympositions based on the relationship between the top levels and the substrate. A substrate-independent symposition has a substrate that is very highly modular, and a substrate-dependent symposition has a substrate that is minimally or not at all modular. {planet}d is substrate-independent because it can be made out of gas, or rock, or liquid, with no requirements as to elemental composition or anything else. Still, the substrate can’t quite be anything, because if it’s pressurized and hot enough to perform nuclear fusion, then it’s a {star}d. {neutron star}d is substrate-dependent because the bottom levels must be nucleon degenerate. {diamond ring}d is substrate-dependent because it has a symponent that must have carbon atoms in a diamond cubic crystal structure. {dining table}d is substrate-independent and part-dependent.

If there are no partonomic gaps in the symposition, then the symposition must have a dedicated structure all throughout from Dusty floor to ceiling. Such a symposition could be called substrate-connected. Molecules are very close to the primitives and are effectively substrate themselves, so they are substrate-connected but trivially so. Generally, most macroscopic sympositions are not substrate-connected, but living beings and increasingly digital technologies form very conspicuous exceptions.

Substrates, parts, and extrinsic symponents are useful for describing many sympositions, but they are a simplification. Let’s temporarily define a partonomic gap more specifically as separating two levels whenever the sympositions at the upper level collect more than 150 symponents each from the lower level. Then for an entity with 1 mole of atoms, it must have at least 11 levels above the level of atoms for it not to have a partonomic gap.3 We can easily imagine sympositions that have multiple partonomic gaps, perhaps two, such that the symposition is substrate-dependent, with a partonomic gap over the substrate, with several levels of structure over the partonomic gap, and then another partonomic gap, and then finally the symposition itself, but then perhaps extrinsic symponents that themselves may or may not have partonomic gaps, etc. Something that could qualify as that is a {national railroad system}d, which has a partonomic gap between the various substrates of the rails, crossties, etc., and a few-foot-long stretch of railroad, another partonomic gap between that stretch and the hundreds of miles of a single railroad, and then perhaps another partonomic gap between a single railroad and the system of hundreds of railroads. The level at the few-foot-long stretch is not the substrate, and though I could define a term for it, I don’t think it would be particularly useful. Overall, the vertical density of levels can be extensively quantified for many different sympositions, but the simple picture of substrates, parts, and extrinsic symponents captures much of the variation in the Logos.
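The level-count arithmetic from note 3 is easy to verify: if each symposition collects at most 150 symponents from the level below, then k levels above the atoms span at most 150^k atoms.

```python
# With at most 150 symponents collected per level, k levels above the
# atoms cover at most 150**k atoms; find the smallest k that spans a mole.
N_A = 6.022e23  # Avogadro's number

k = 1
while 150 ** k < N_A:
    k += 1
print(k)  # 11
```

Since 150¹⁰ is only about 5.8 × 10²¹, ten levels fall short of a mole by two orders of magnitude; eleven levels suffice.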


1. And since “com-” and “-position” are both internal to the same language, Latin, whereas “sym-” is from another language, Greek, we have a pun on their semantic vs. etymological differences.

2. Radiations that are actually continuous, rather than quasi-continuous, could be provided by String Theory.

3. 1 mole of atoms is Avogadro’s number of them, which is 6.022 × 10²³. 150¹⁰ < Avogadro’s number < 150¹¹.

5. Individuals, Populations, Distillations, and Modularity

The fundamental difference between a physical set and a symposition is that the latter explicitly considers and addresses repetition. The underlying premise is that an entity cannot be apprehended adequately by itself; it must be compared to other entities. The problem eliciting that premise is that sem-linking is ultimately an arbitrary act: why select these few primitives or symponents for sem-linking rather than those? Whereas physical proximity provides a useful criterion for non-arbitrariness, it is insufficient to remove it entirely, and in some cases, the criterion is downright counter to sense—a bee is part of its hive, not part of the flower it’s visiting. Other criteria may be attempted, like being fastened together, or being a living being, but what happens in all of these attempts is that there are symponents that we believe should be symposed,1 and the commonalities shared among many different examples of the belief become the criteria. I believe the most apt approach concentrates not on the development of any specific criteria, but on shared commonality itself: on repetition.

Repetition implies a multiplicity of singles. Thus we can define the individual symposition: a single, specific symposition in the World with a hierarchy of symponents all the way down to the physical primitives. The Moon is an individual symposition. Antarctica is an individual symposition. The room I wrote this sentence in is an individual symposition. In English writing, an individual symposition is often capitalized or preceded by the word “the,” and it could also just be called an individual. We can also define the populational symposition or the population: a symposition whose many symponents are similar individuals.

This definition of population suffers from the fact that the similar individuals need not be in any spatial relationships with each other whatsoever, flouting the notion of “pose” needed for symposition, in this case the symposition of individuals into a population. However, if we allow ourselves the population of populations of individuals, we can compare it to regular populations. The symponents of the population of power strips all bear a similarity pertaining to having certain symponents with certain poses. The symponents of the population of populations of individuals all bear a similarity pertaining to having symponents that are similar. Thus what is similar among the symponents of the population of populations of individuals is similarity itself and not some configuration of symponents and poses. What repeats from one population to the next is repetition. Though the original definition of “pose” is violated to accommodate the definition of “population,” the violation is justified by an additional reckoning of repetition, which is the very point of symposing in the first place.

It is trivial that the partonomy of a population has one more level at the top than the partonomy of any of the individuals in it. Consider the possibility of a symposition that, like the population, accounts for multiplicity, but that, like the individual, does not have an extra level. Such a symposition could be crafted by taking an average of sorts of all the individuals in a population. It would distill the similarities that repeat, so we can call it a distilled symposition, or a distillation. Unlike an individual or a population, however, a distillation is not present in a specific place or places in the World upon a specific subset of the Dust; it is rather the ghost of repeated similarity. Thus the distillation and the individual are alike in that they have a similar partonomy, at least at the top; the individual and population are alike in that they both are grounded on the Dust; and the distillation and population are alike in that they both consider multiplicity. When necessary for clarity, I’ll identify the three with subscripts: {}i, {}p, and {}d.

Let’s compare hydrogen and helium. {hydrogen}i and {helium}i would refer to some specific atom of hydrogen and some specific atom of helium floating somewhere; both would have an exact number of primitives, but neither would ever be referred to since we basically never care about individual atoms. {hydrogen}p and {helium}p would refer to all hydrogen atoms and all helium atoms in the Universe; since the former is so much more abundant than the latter, {hydrogen}p has more mass than {helium}p. {hydrogen}d and {helium}d would refer to the abstract notions of the hydrogen and helium atoms; neither would have an exact number of primitives, instead incorporating the isotopic abundances and ionization profiles throughout the Universe of both; finally, since the former has both fewer quarks and fewer electrons than the latter, {hydrogen}d has less mass than {helium}d.

A distillation reckons and incorporates the abundances of the different individuals in the distilled population. Consider the population of shirts. Some shirts have sleeves, some don’t, some have long sleeves or frilly sleeves. Some shirts have necks, most don’t. Some shirts have pockets. Many shirts have buttons or zippers. {shirt}d must account for all of these varieties of symponents. {shirt}d has at least one symponent that can be any of multiple sympositions or none—the zipper/row of buttons for instance. Such flexible symponents exhibit modularity. Modularity is implicit in the original definition of a distillation, but I include it to highlight the fact that sympositional variation can be more than just quantitative (like in the number of symponents as in isotopy) but also qualitative; the symponent need not always come from one given population. If the number of populations that the symponent can be drawn from is large, the modularity is high, and if the number of options is small, the modularity is low.


1. Mereological nihilists and universalists have analyzed themselves into a corner where they deny this. The denial appears to me to be the most precarious contention of analytic philosophy. See Every Thing Must Go: Metaphysics Naturalized by James Ladyman and Don Ross.

4. Pose, Sympositions, Masking, and the Logos

What is the partonomy of the Universe and what does repetition have to do with it? Addressing that question is the goal of this book. We can try answering the first part of the question with the notion of a physical set, but we immediately run into the problem that physical sets do not seem equipped to tackle power strips or trees, or much of anything that isn’t nanoscopic or astronomical. We solicit sem-linking, expanding the notion of the physical set and banishing that problem, but two more problems arise which are more subtle and basically mirror images of each other.

The physical, more specifically electromagnetic, set corresponding to atomic helium is {e1, e2, nucHe}. The nuclear primitives of the atom have been reprimitivized and the new primitive has been denoted by nucHe. Reprimitivization is just a special case of sem-linking, so we can also say that the primitives have been sem-linked and the product of the sem-linking has been denoted by nucHe. nucHe is a nuclear set, but what is that set explicitly? In the last chapter, I noted that the nucleus of helium-4 was {{qu1, qu2, qd1}, {qu1, qu2, qd1}, {qu1, qd1, qd2}, {qu1, qd1, qd2}}. Helium-4 is just one of the varieties, also known as isotopes, of helium, however; another, much less common, isotope is helium-3, whose nucleus is {{qu1, qu2, qd1}, {qu1, qu2, qd1}, {qu1, qd1, qd2}}. To say both nucHe = {{qu1, qu2, qd1}, {qu1, qu2, qd1}, {qu1, qd1, qd2}, {qu1, qd1, qd2}} and nucHe = {{qu1, qu2, qd1}, {qu1, qu2, qd1}, {qu1, qd1, qd2}} is a flagrant mathematical falsehood; sets with a different number of elements cannot be equal. But we need this falsehood, just like we needed the fraudulence of reprimitivization, or else we wouldn’t be able to speak intelligently about chemical elements. So we see that the first of the two problems is that entities with different partonomies should sometimes be reckoned as being equivalent.

But what does repetition have to do with the first problem? Why is it that to speak intelligently about nuclei, sem-linked clusters of quark triplets, we use words that reflect the number of protons, positively charged quark triplets, instead of say the number of neutrons or the total number of nucleons? The concept of “isotope” should be familiar to many of my readers, but less so perhaps are the concepts of “isotone” and “isobar.” If {n, n, p, p} and {n, p, p} are isotopes, then {n, n, p, p} and {n, n, p} are isotones and {n, n, p} and {n, p, p} are isobars.1 The names of chemical elements track isotopes and not isotones or isobars because it is the electromagnetic charge of nuclei that determines their interactions after reprimitivization, which in turn determines the repetitivities of the electromagnetic sets they participate in. Atoms that bond to other atoms almost never have nuclei with 2, 10, 18, 36, 54, or 86 protons, for instance, but there is no such rigid pattern with the number of neutrons. If the number of neutrons or nucleons determined the patterns, then the nomenclature would correspond to isotones or isobars instead of isotopes.2
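The three groupings can be sketched by representing nuclei as multisets of nucleons; nuclei with different nucleon counts are unequal as multisets, yet each of the three classifications picks out a different shared count.

```python
from collections import Counter

# Nuclei as multisets of nucleons: unequal whenever the counts differ,
# yet groupable as isotopes, isotones, or isobars by different counts.
he4 = Counter(p=2, n=2)   # helium-4:  {n, n, p, p}
he3 = Counter(p=2, n=1)   # helium-3:  {n, p, p}
h3  = Counter(p=1, n=2)   # tritium:   {n, n, p}

print(he4 == he3)                             # False: different partonomies
print(he4["p"] == he3["p"])                   # True: isotopes share proton count
print(he4["n"] == h3["n"])                    # True: isotones share neutron count
print(sum(he3.values()) == sum(h3.values()))  # True: isobars share nucleon count
```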

The second problem is the mirror opposite: sometimes, entities with the same partonomy should not be reckoned as being equivalent. The different elements of a partonomy may be situated at different distances from each other, or be rotated, or otherwise be arranged in such a way that their partonomic breakdown remains exactly the same but their character is different. For instance, the partonomies of a flag at full mast and a flag at half mast are the same, as are the partonomies of “d” and “p,” a straight and a curved rod, chair and boat cyclohexane, the skeletons of a bat wing and of a human arm. Equivalent partonomies do not necessitate any equivalent character, even though they often coincide. In effect, there are repetitivities that have no partonomic significance, but instead differentiate among entities with the same partonomies.

Since physical position is so important for determining the identity and character of what repeats, I’ll enlist the word “pose” to describe this notion of physical set relationships dependent upon physical position. That addresses the second problem; to address the first problem simultaneously, I’ll generalize pose to also account for blindness to changes in partonomic structure that preserve important aspects of repetition, as with isotopes above. Now let me coin the word “symposition.” Actually, I flatter myself; those letters have been positioned together before, although crucially, Google reveals that the letters of “a symposition is” have not. Repairing the oversight, a symposition is a collection of entities defined by their characteristic relational poses. A symposition, importantly, can ignore variation in non-characteristic poses predicated on changes in the partonomic hierarchy, unlike a physical set, and also unlike a physical set may care very much about poses that do not change the partonomic hierarchy. I will use braces {} around one term henceforward to identify sympositions, unless I want to identify a set, in which case that will be clear. A symposition’s entities can also be called its symponents, which are also sympositions in their own right unless they are physical primitives. Sympositions, like physical primitives and physical sets, can interact. The concepts of pose and interaction are analogous, the first one highlighting more static relationships within a symposition and the latter more dynamic ones between sympositions, but it should be clear that there is no fundamental divide between the two notions.

At the scale of atoms, the interactions between sympositions are still neatly described by the Standard Model. However, as sympositions climb ever more levels, it becomes unmanageable to the point of absurdity to use the Standard Model to specify their interactions. In that case, we can say the Standard Model or its fundamental forces have been masked. The three bonding forces, however, are not masked in the same way. The strong force is masked exactly once into the residual strong force that bonds nucleons. The gravitational force is never masked because it is monovalently attractive and doesn’t have different charges like the strong force, such that all gravitational interactions involve just orbits. Consequently almost all the masking in the Universe is masked electromagnetism, which is not surprising since it is the only divalent bonding force, and indeed it is the force that supports power strips and trees.

In a partonomy, one can move horizontally from one symponent to the next of a given symposition, or one can move vertically from a symposition to its symponents on a lower level or to the symposition it is a symponent of on a higher level. This vertical and horizontal space around a symposition in a partonomy is the symposition’s partonomic neighborhood. The partonomic neighbors of the Earth, for instance, are horizontally the other planets and vertically its own geologic layers and the Solar System as a whole. In many cases the horizontal spread of a partonomic neighborhood will be very similar to a spatial neighborhood, but only insofar as spatial relationships are a proxy for pose. Two partonomic neighborhoods that would otherwise be isolated can sometimes be connected by a symposition that allows the sympositions in both neighborhoods to interact. Such a symposition is a partonomic duct. Microscopes and telephones are examples of partonomic ducts, the former vertically and the latter horizontally. Partonomic ducts effectively expand a symposition’s partonomic neighborhood and allow it to interact with more sympositions.
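To make the geometry of a partonomic neighborhood concrete, here is a small illustrative sketch that reconstructs the Earth example above; the class and method names are hypothetical, and horizontal neighbors are taken to be siblings under the same parent while vertical neighbors are a symposition's own symponents plus the symposition it belongs to:

```python
class Symposition:
    """A node in a partonomy: a symposition together with its symponents."""
    def __init__(self, name, symponents=()):
        self.name = name
        self.parent = None
        self.symponents = list(symponents)
        for s in self.symponents:
            s.parent = self

    def neighborhood(self):
        """Return (horizontal, vertical) partonomic neighbors by name."""
        siblings = self.parent.symponents if self.parent else []
        horizontal = [s.name for s in siblings if s is not self]
        vertical = [s.name for s in self.symponents]
        if self.parent:
            vertical.append(self.parent.name)
        return horizontal, vertical

# The Earth example from the text, heavily abridged:
earth = Symposition("Earth", [Symposition("crust"),
                              Symposition("mantle"),
                              Symposition("core")])
solar_system = Symposition("Solar System",
                           [Symposition("Sun"), Symposition("Venus"),
                            earth, Symposition("Mars")])

h, v = earth.neighborhood()
print(h)  # ['Sun', 'Venus', 'Mars']
print(v)  # ['crust', 'mantle', 'core', 'Solar System']
```

A partonomic duct would then amount to an edge between two nodes that this parent-and-sibling structure does not otherwise connect.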

So we have a Dusty Universe filled with physical primitives, and these primitives get sem-linked into sympositions on account of their pose. In turn, collections of sympositions can themselves be sem-linked into larger sympositions, again on account of their pose. Thus we can fashion for the Universe a set whose primitives are the physical primitives and whose elements are those sympositions which cannot be sem-linked further. I call such a symposition incorporating all sympositions the Logos,3 and we can observe that as the Universe has developed after the Big Bang, the Logos has inexorably shrunk in the number of elements at the top and gotten correspondingly hierarchically deeper, even while retaining approximately the same number of physical primitives. The Logos can also be conceptualized as everything in the Universe that is not inherent in the Dust itself but rather in its arrangement. Finally, instead of asking “What is the partonomy of the Universe?” we can ask “What is the Logos, and how did it grow upon the Dust?”


1. Where “n” denotes a neutron and “p” a proton. The contradistinction of “isotope” and “isotone” should be obvious.

2. It’s worth pointing out that the repetitive patterns appealed to for naming nuclei are outside of them and not within them. This will be revisited later.

3. It is unfortunate that “logos” has such a close association with the “ethos, logos, pathos” trifecta of rhetoric, as the term has a very rich philosophical and religious history that has nothing to do with the art of giving speeches. Philo of Alexandria, a Hellenized Jew, described the Logos in De Profugis as “the bond of everything, holding all things together and binding all the parts, and prevents them from being dissolved and separate.” I believe his definition of the term matches mine very well. Further, the ending “-logy” derives from “logos,” and it is convenient that many of the -logies explore different parts of the Logos, such as cosmology and the cosmologos. But perhaps that is facile; less so is a contrasting of “analogy” and “homology,” which I will explore later.

3. Reprimitivization, Physical Sets, Partonomies, Levels, and Sem-linking

Since the three bonding forces vary so dramatically in strength, timing, and scale, there are domains in which each can be analyzed almost entirely in isolation. The domain of the strong force is nuclear physics, the domain of electromagnetism is chemistry, and the domain of gravity is cosmology. The bonds within each of these depend entirely on the force of the domain, and it’s useful to imagine the entities being bonded as primitives in their own right within that domain, in a process I call reprimitivization, giving a new pile of Dust. In nuclear physics, no reprimitivization is necessary because quarks are in fact primitives. The electromagnetic Dust for chemistry is electrons and nuclei, and only the latter need to be reprimitivized. In cosmology, the new primitives are stars, planets, black holes, moons, and other celestial bodies. Each of these domains has frayed edges—the frays between nuclear physics and chemistry being in contexts like nuclear fission and neutron star collapse, the frays between chemistry and cosmology in contexts like planetary formation and comet tails.

Within each domain, pairs or groups of primitives that are bonded can be bonded further, usually with bonds that operate at a different scale of strength, distance, or timing. In the nucleus, triplets of quarks bond through the strong force into nucleons, i.e. protons and neutrons, and anywhere from 2 to well over 100 nucleons bond together through the weaker residual strong force into entire nuclei. In chemistry, electrons bond to one nucleus to form neutral atoms and charged ions, to two or a few1 nuclei to form covalent molecules or polyatomic ions, or to indefinitely many to form metals. Atoms bonded to other atoms through covalent bonds can form molecules in a range from minimal clumps like carbon monoxide or water, to indefinitely long chains like polymers and plastics, to indefinitely flat sheets like those in talc and graphite, to indefinitely bulky 3D matrices like quartz and diamond. Ions bond to form salts. Molecules bond through Van der Waals forces to form bulk materials. In cosmology, primitives are bonded either in bundles of orbits, as in a solar system, or in a mostly unstructured cloud of gravitational interaction, as in a star cluster or a galaxy.

The act of bonding creates a relationship between two or more primitives. To characterize such relationships, let’s take a short detour through mathematics to acquire some relevant vocabulary. The “set,” a foundational notion in math, is defined as a collection of distinct objects.2 Examples of sets include the even numbers, the odd numbers, the multiples of 13, and the negative numbers. Of course, sets aren’t restricted to collections of numbers, so there are more examples like collections of linear functions, finite groups, differentiable manifolds, and the rest of the moonshine that mathematicians study. The notation for sets consists of two curly braces as bookends, with some stuff in between, separated by commas. Let’s say we want a set that contains the integers between 4 and 10, excluding 4 and 10. Then the braces with the “stuff in between” are: {5, 6, 7, 8, 9}. Five elements—pretty easy. There’s also some special notation for the set containing no elements, the empty set: Ø, which could otherwise be written as {}.

Now things get interesting. We can use sets to make sets! Let’s make the set {5, 6, {7, 8}, 9}. Here we have a set containing 5, 6, a set containing 7 and 8, and 9. The set has 4 elements, one of which is a set containing 2 elements. It may be unclear why it has 4 and not 5. In addition to the conventional notion of “element,” let’s entertain the notion of a “mathematical primitive” and think of it as a mathematical object that isn’t a set.3 Then we can assert that though the set contains 4 elements, it contains 5 mathematical primitives. How about the set {5, 6, Ø, 9}? This set also contains 4 elements, but curiously contains only 3 mathematical primitives! There’s a sense in which the empty set creates something from nothing.4 More vividly, we can ask: does {Ø} = Ø? Certainly not! {Ø}, on the left, isn’t empty; it in fact contains Ø. Ø, on the right, is empty; it contains nothing. So we then observe that while Ø contains nothing, it itself is not nothing. Finally, we can observe that {5, 6, 7, 8, 9} does not equal {5, 6, {7, 8}, 9} because the elements don’t match.
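The element-versus-primitive distinction is easy to check mechanically. Here is a sketch in Python, using `frozenset` because ordinary Python sets cannot contain other sets (set members must be hashable); the two counting functions are my own names for the notions in the text:

```python
def count_elements(s):
    """Top-level members of a set."""
    return len(s)

def count_primitives(s):
    """Mathematical primitives: members, recursively, that are not sets."""
    total = 0
    for x in s:
        if isinstance(x, frozenset):
            total += count_primitives(x)  # a nested set contributes its own primitives
        else:
            total += 1                    # a non-set member is a primitive
    return total

empty = frozenset()                           # Ø
a = frozenset({5, 6, frozenset({7, 8}), 9})   # {5, 6, {7, 8}, 9}
b = frozenset({5, 6, empty, 9})               # {5, 6, Ø, 9}

print(count_elements(a), count_primitives(a))  # 4 5
print(count_elements(b), count_primitives(b))  # 4 3
print(frozenset({empty}) == empty)             # False: {Ø} is not Ø
```

The last line is the vivid case from the text: {Ø} has one element while Ø has none, so the two are unequal even though neither contains any primitives.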

An important notion in applied math is the “tree,” which is a specific type of graph. A graph is a mathematical object constructed from points or “vertices” together with the lines or “edges” that connect them.5 A tree is a graph that is connected and has no loops in it; that is, there is always exactly one path along edges and vertices to get from any one vertex to any other without backtracking. The file structure of all computers as far as I know is a tree, which you can see in most file viewers by expanding all folders and subfolders if you’ve ever had enough time to waste. The leaves of a tree are those vertices that are connected to only one edge and are in a sense on the “tips” of the tree. In a computer file system, that would be all of the individual files in whichever folders. If you consider mathematical primitives as leaves, then every set has a corresponding tree, where the primitives and sets are vertices and the edges connect sets to their contents, either other sets or primitives. Such a tree can also be called a nested hierarchy, or just a hierarchy.

Figure 1. A few sets and their trees.

But what does this have to do with physical bonding? The bonding patterns of each force within its domain can be regarded as sets. A nucleus, for instance, is a set of 1-100+ sets of 3 quarks. The nucleus of helium-4 as an example would be {{qu1, qu2, qd1}, {qu3, qu4, qd2}, {qu5, qd3, qd4}, {qu6, qd5, qd6}}. A molecule, say water, would be {nucH1, e1, e2, e3, e4, {e5, nucO1, e6}, e7, e8, e9, e10, nucH2}, although a conventional Lewis diagram would be clearer.6 The Solar System would be {Sun, {Mercury}, {Venus}, {Earth, {Moon}}, {Mars, {Phobos, Deimos}}, {asteroids}, {Jupiter, {Metis, Adrastea, Amalthea, Thebe, Io, Europa, etc.}}, etc.}. Each of these would be a physical set, and more specifically a strong nuclear set, an electromagnetic set, and a gravitational set, respectively. The tree or nested hierarchy of each physical set is called its “partonomy,” since it breaks down the parthood relationships within the set,7 and each nesting creates a level, with the Dust at the lowest level and each reprimitivization on its own level.
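A partonomy like the helium-4 nucleus can be written down as a nested structure and its levels counted. Here is a small sketch, using Python tuples for the physical sets (which imposes an arbitrary serial order the set itself does not have) and strings as stand-in Dust; both function names are mine:

```python
def depth(x):
    """Number of levels in a partonomy, with the Dust (primitives) at level 0."""
    if isinstance(x, tuple):                  # a physical set
        return 1 + max(depth(e) for e in x)
    return 0                                  # a primitive

def primitives(x):
    """Flatten a partonomy back into its heap of Dust."""
    if isinstance(x, tuple):
        return [p for e in x for p in primitives(e)]
    return [x]

# Helium-4: two protons and two neutrons, each a triplet of quarks.
helium4 = (("qu1", "qu2", "qd1"), ("qu3", "qu4", "qd2"),
           ("qu5", "qd3", "qd4"), ("qu6", "qd5", "qd6"))

print(depth(helium4))            # 2 levels above the Dust: quarks -> nucleons -> nucleus
print(len(primitives(helium4)))  # 12 quarks
```

The two nestings correspond to the two sem-linkings: quarks into nucleons by the strong force, and nucleons into the nucleus by the residual strong force.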

Reprimitivization is quite interesting because it’s fundamentally fraudulent—there is only one real Dust heap—while still being very useful. Reprimitivization, however, is merely a special case of the more general fraudulent process of treating a collection of primitives or a collection of collections just like an individual primitive. Notice the ease with which you can talk about protons and electrons in the same sentence. Further, reprimitivization operates at levels where the responsibility of a particular bonding force for the collecting is always very clear, but often we will have collections where there is no such clarity. As in the first chapter, a power strip is a collection of sockets and wires and other parts, but which of the bonding forces is responsible for keeping them together? Which of the bonding forces is responsible for keeping a tree together? An answer isn’t terribly difficult to deduce (hint: electromagnetism), but it would be difficult to talk about the attraction and repulsion of Dust in the elevator-pitch version of the answer.

I call that process sem-linking, for reasons that will be clear later. Reprimitivization sem-links nucleons into nuclei as chemical primitives and enormous quantities of chemicals into stars and planets as cosmological primitives, but sem-linking itself covers even more ground, taking us from three quarks to a proton and from half a dozen sockets to a power strip and from branches and leaves to a tree. Basically, every node in a partonomy that is not a leaf is the sem-linking of some of the nodes in the level below it. Further, the physical sets constructed by sem-linking will have interactions of their own, which of course are predicated on the interactions of the Dust, the four fundamental forces of the Standard Model.


1. As in benzene or phenyl groups, for instance.

2. In a set, the objects must be distinct. If they’re not, you have a multiset. If a set or multiset is ordered, then you have a tuple. If a tuple only contains numbers, then you have a vector. If you order numbers along two or more dimensions, then you have a matrix or a tensor.

3. Also known as an “ur-element.”

4. This is less fanciful than it sounds. Mathematical systems can construct the numbers and beyond from just the empty set, like the standard Zermelo-Fraenkel axiomatization of set theory. Integers would not be “mathematical primitives” in this case, but rather carefully constructed sets in their own right. The mathematical primitive there is just the empty set and the nothing it contains. See G. Spencer-Brown’s Laws of Form.

5. A graph also means a picture used to plot information, often in the Cartesian plane, but that is a separate meaning of the word.

6. But it would also hide the two non-valence electrons in the first shell around the oxygen nucleus.

7. Also called a “meronomy,” but the Special Composition Question can wait.

2. The Condensation of the Dust

I don’t care to wed my arguments to any specific rendition of fundamental physics, but all of my arguments’ illustrations will be of things in our Universe painted by such physics, and we can better appreciate the illustrations if we know something about the style. There are four fundamental interactions that the physical primitives engage in: the strong nuclear force, the weak nuclear force, the electromagnetic force, and gravity. The exact profile of interactions reflects the type of primitive. Quarks interact through all of them, for instance, and electrons through all but the strong force. All of the forces but the weak force participate in bonding, one of the most important kinds of interactions. The weak force mostly functions as an intermediary between the strong force and the electromagnetic force, supporting the interface between the intranuclear and extranuclear environments during radioactive decay. The bonding forces vary drastically in their bonding strength, with the strong force by far the strongest and gravity by far the weakest, with electromagnetism in between.

Just as relevant as the relative strengths of the bonding forces is their valency and their relationship with distance. The strong force and gravity are only attractive, and thus we can call them monovalent, and no two primitives will be repelled through these forces. Electromagnetism, on the other hand, is both attractive and repulsive and thus divalent. If two primitives have the same electromagnetic charge, they are repelled, and if they have the opposite charge, they are attracted. A consequence of this is that if you have three primitives together, they cannot all be attracted to each other electromagnetically. With many more than three, what eventually happens is that the primitives distribute themselves evenly such that no large region has more positive charge than negative charge and thus most objects do not attract or repel each other through the electromagnetic force. When two primitives are attracted, they move towards each other until they no longer can, either because they latch into quantum mechanical structures (like electrons and protons into hydrogen atoms) or because they meet and transform into other particles (like electrons and protons into neutrons and neutrinos, with help from the weak force).

The electromagnetic force and gravity get weaker with distance. Since gravity is monovalent, its attraction cannot cancel locally the way electromagnetism’s does, so larger and larger objects attract each other gravitationally ever more strongly. At very large scales like the solar system, we then see that gravity ends up doing all the work of keeping everything orbiting together while the strong force and electromagnetism do none of it. The strong nuclear force, despite being monovalently attractive, comes with three types of charge, known as “color charge,” each of which can be positive or negative. Unlike gravity and electromagnetism, it does not get weaker with distance.1 Just like electromagnetism and unlike gravity, charges end up cancelling almost entirely at small scales, now within protons and neutrons, with some slight leakage beyond them that reaches the scale of an atomic nucleus known as the residual strong force, which does get weaker with distance. Protons and neutrons are the quantum mechanical structures that quarks latch into because they’re prevented from collapsing all the way by the Pauli exclusion principle, which also gives atoms larger than hydrogen their layered electronic structure.

The valency and distance-dependence of the three bonding forces combine so as to produce various domains of material density separated by quantifiable boundaries. Immediately after the start of the Big Bang, the Dust was distributed almost entirely homogeneously throughout the Universe in a very dense quark-gluon plasma. After several minutes of explosive expansion, the strong force acted to preserve tiny pockets of high quark-gluon density in the face of the expansion as protons and neutrons and nuclei like those of hydrogen and helium. Then about 380,000 years later, still a cosmological eye blink, the nuclei wrapped themselves in as many electrons as they had protons, locally cancelling electromagnetic charges and making the Universe transparent to photons, the first of which are visible as the cosmic microwave background radiation, the quintessential piece of evidence for the Big Bang. The atoms were still distributed homogeneously, however, and since the strong force and the electromagnetic force had already created their bonds, the only force left to do anything was gravity.

And gravity was ready. Since gravity wants to pull everything together, the state of the Universe where gravity has the most work left to do is the state where everything is apart, diffuse, and homogeneous. Unfortunately, a homogeneous Universe has no preferred locations where matter can gravitationally coalesce.2 Fortunately, the Universe was only almost homogeneous, with shallow ripples created by the spontaneous meows of probabilistic interactions dead and alive.3 The ripples in the Dust created regions in the primordial gas that were slightly denser than others, regions where gravity could take hold and pull things together. Without them, the Dust would have remained smooth, no gravitational coalescence would have occurred, no further bonding would have arisen among the atoms, and nature would have produced no more structure.4 Gravity, though not yet in conflict with either the strong force or the electromagnetic force, was already in conflict with something else: heat. Local spontaneously-produced high-density regions in the gas could be disbanded by heatwaves pulsing through them. Ultimately, the interference of heat merely postponed the further coalescence of matter because local regions of sufficient density and gravitation to overcome the outward pressure of heat would eventually materialize in the vastness of space. This instability of gas to gravity is known as the Jeans instability, and it is the first limit between the homogeneous cosmic gas and its opposite: the black hole.

Once matter has coalesced enough, further action by gravity cannot be blocked by heat, and the gas collapses in free fall. The collapse stops when gravity comes into conflict with the electromagnetic force, when electrons can get no closer together due to Pauli exclusion, creating an outward pressure known as electron degeneracy pressure, which is much stronger than the pressure from heat that gravity has previously conquered. This is the density regime where most stars and all planets and moons lie, squeezed together by gravity but upheld by electromagnetism. If even more matter has collapsed together, then gravity is strong enough to transform the primitives, each proton vacuuming up an electron with the weak force, flipping one of its up-quarks into a down-quark and making it a neutron, while ejecting a neutrino. With the electrons and their degeneracy pressure out of the way, the matter once again collapses in free fall until the neutrons can get no closer together due to their own Pauli exclusion.

This is the density regime of neutron stars, with the amount of mass required to overcome the electron degeneracy pressure known as the Chandrasekhar limit, the second limit between cosmic gas and black holes. Our sun’s mass is only 72% of the way to this limit, so it will never become a neutron star or a black hole. Heat again acts to postpone collapse to the neutron star regime; a star above the Chandrasekhar limit will only become a neutron star once it has cooled sufficiently. With even more mass, the neutron degeneracy pressure can also be overcome, melting the neutrons together into a quark-gluon plasma. The amount of mass required for that is known as the Tolman–Oppenheimer–Volkoff limit. Finally, with even more mass, the quark degeneracy pressure is also overcome, but the limit for that has not been named because its details are even murkier than those for the Tolman–Oppenheimer–Volkoff limit. Heat, electromagnetism, and the strong force all vanquished by gravity, the sufficiently massive object implodes into a black hole, which is beyond the description of our current physics.

It’s interesting to summarize how the relative strengths of the forces map themselves to events in Universal history and in the development of local pockets. The strong force is stronger than electromagnetism which is stronger than gravity. The first bonds that were formed in the Universe were strong bonds followed by electromagnetic bonds and then gravitational bonds. When gas clouds collapse to a black hole, the first limit lies at the frontier of heat and gravity, the next at the frontier of electromagnetism and gravity, where every human has lived and died, and the last at the staggered frontier of the strong force and gravity. Just like a humid atmosphere filled with water vapor spontaneously develops flakes of snow and droplets of water, so did the Universe filled with Dust spontaneously develop flakes of planet and droplets of star in swirling galactic storms.


1. The strong force is very strange for many reasons. Physicists still haven’t characterized it completely.

2. See Buridan’s ass.

3. See Schrödinger’s cat.

4. See “clinamen.”

1. Divisibility, Repetition, and the Dust

If you take a casual walk or snatch a few glances at your surroundings, you will probably not notice something trivially obvious. You will probably not notice that basically anything you see can be divided into parts. Of course, in retrospect the fact is readily apparent, but did you actually think it? Regardless, if we consider the idea of such division, one question arises immediately: can we continue dividing forever? If you were to pick something to test the question, you’d probably pick something brittle roughly the size of your hands. As you divide it and divide the divisions, your hands would quickly fail you after the first dozen or so recurrences, and your eyes shortly after as the parts become too small to identify, let alone handle. Technical difficulties would get in the way of an answer for you just as they did for the entire human species for millennia.

The question was a hot topic among philosophers, even long before certain philosophers became known as “scientists” and then “physicists.” The ancient Greeks held a variety of views. Plato and Democritus argued that if you divided far enough, you would eventually find a flurry of geometric particles like earthy cubes or fiery tetrahedrons stacking on and flowing past each other. Aristotle pointed out that this would require a void between the particles, and since “nature abhors a vacuum,” he postulated that things in the World were made of continuous elements like earth and fire that blended together in various proportions and were imbued by “form” only accessible at larger scales. The ancient Indians also participated in the debate, although it’s unclear whether there was any direct dialogue with the Greeks. In the Nyaya and Vaisheshika schools of orthodox Hinduism, divisibility ended with particles that combined two to a pair and three pairs to a triad. The Jain school, heterodox to the Vedic tradition, also argued elaborately for particles, each with its own smell and color, among other properties.

More recently, philosophers like Leibniz presented the possibility that there might be no end to the divisibility but without it ending in continuity either. “[E]ach portion of matter can be conceived as like a garden full of plants, or like a pond full of fish. But each branch of a plant, each organ of an animal, each drop of its bodily fluids is also a similar garden or a similar pond,”1 he explained, foreshadowing future conjectures that the smallest entities might just be universes in their own right, and our entire Universe a speck in another. The debate could have continued forever in a cacophony of reasonableness or lack thereof, but fortunately our species subdued the technical difficulties, and the actual character of the minuscule trudged an evidenced trail into our minds and theories.

We are lucky to stand on the shoulders of giants. In the twenty-first century, we stand on an entire dogpile of them, or perhaps a dogpile of dogpiles. We definitively learned in the nineteenth century that the divisibility ends with a list of indivisible particles, each a pure element of chemical activity like hydrogen or nitrogen. But wait! If you’re a student of modern chemistry, you should find that statement in egregious error. This error, however, is written directly into scientific vocabulary. When developments in twentieth-century physics showed that the allegedly elementary atoms were divisible into even smaller particles, it became clear that nineteenth-century chemistry had not found the end of divisibility and had misnamed the atom—”indivisible” being its Greek meaning.

We are now aware that atoms violate their etymology and can be divided into nuclei and the electrons orbiting them. We have found that the nuclei can be further divided into protons and neutrons, both of which divide yet again into three quarks of two different types, up-quarks and down-quarks. The two types mix and match so as to give the proton a +1 electric charge and the neutron a 0 charge, and they fortunately can no longer be divided. The electron has an electric charge of equal magnitude but opposite sign as the proton, and since in an uncharged atom there are the same number of electrons in the electron shells as there are protons in the nucleus, their charges cancel out, and the whole assemblage is electrically neutral.

This picture of the minuscule is readily comparable to the fantasies of the Greek and Indian atomists. The details are, of course, radically different, but the overall projects harmonize in intent. We should ask, however, whether the modern picture is the last word; have we actually found the end of divisibility? Physicists think so, which is why they call electrons and quarks “elementary particles” without etymological disclaimers unlike with “atom” or “element.” Their reasons are complicated; in dry experimental terms, evidence hasn’t been found to the contrary, but in more profound theoretical terms, the math works out beautifully if they are indivisible. The particles we know fill a list of slots in the mathematical paradigms of the symmetries and regularities of dimensionality and change. To explain further than this would be to delve into theoretical physics, but that is not this book.2

There are several more elementary particles than the ones I have mentioned, but most of them are unstable and decay very quickly into the conventional ones. The ones I have mentioned are also all of a type called “fermions,” which incorrectly but effectively means they stay around a long time and are made to move around and stick together by other elementary particles, known as “gauge bosons,” which carry forces. Some gauge bosons are the glue that keep protons and neutrons and nuclei together. Another gauge boson is the photon, which carries the electromagnetic force responsible for light and the chemical bonding of electrons to and among nuclei in an atom or molecule. An important property of fermions is that they cannot occupy the same location, a property known as the Pauli exclusion principle, whereas bosons can, and this restriction will have implications for how fermions pack together.

We see that our Universe supplies a very specific answer to the divisibility question. The divisibility is finite, and it ends with particles.3 From this point forward, I’ll refer to those elementary fermions that stay around a while as physical primitives, because I don’t care to wed my arguments to any specific rendition of fundamental physics except for particleness; and I’ll refer to the four forces of the Standard Model of physics—electromagnetism and gravity among others—as carried by the elementary bosons4 as the interactions of physical primitives. Importantly, the intuitive, classical picture of discrete particles bouncing around like billiard balls because of whatever reasons is sufficient. So if you know or vaguely remember a few things about molecular bonding, but nothing at all about wavefunctions, then your physics intuition will serve as a perfectly good foundation. I’ll also refer to all the physical primitives in the Universe collectively as the Dust.

If you take another walk or glance, there is another trivially obvious fact that you will probably not notice either. You might miss that basically anything you can see repeats many times in the World—the trees, one after the next, lining the street; the power strips under your desk, violating the fire code. If we divide, we curiously observe that repetition often continues to hold. Each tree has many leaves and branches, each power strip has many sockets. Each division gives a list of parts which may repeat. It could easily be the case that the number of kinds of things grows with each division. Leaves and branches are not sockets, and plant cells are not wires. At the bottom, the physical primitives of trees could be different from the physical primitives of power strips and everything else. It is not a surprise to us that they in fact share physical primitives given our familiarity with electrons and protons and neutrons, but perhaps it should be.

In the end, we see that both things that can and things that cannot be divided repeat many times in the World. Why do they repeat? For the former, it is clear that there is something in the nature of certain arrangements of the Dust that make them good at being in the World. For the latter, this explanation obviously does not suffice, because a primitive cannot be an arrangement of other primitives. This book leaves the latter question as to the proliferation of physical primitives alone, but will try to attack the former question exhaustively in a regimented but flexible framework that will touch many scientific disciplines along the way.


1. The Monadology

2. See Zero to Infinity by Peter Rowlands.

3. You may object that I am ignoring wave-particle duality. Wave-particle duality is encountered below the quantum-to-classical transition, and not above it, so I won’t address it in this book, as my main purpose is not to discuss quantum mechanics but the classical realm. The differences between the quantum and the classical domains are profound, but they are beyond the scope of this work even while being indispensable for my larger project. See Appendix N. For now, suffice it to say that treating fermions straightforwardly like particles distorts their character very little when viewed from above by us classical beings. Alternatively, you may object that I am ignoring String Theory; see chapter 10 and stay tuned for my next book.

4. Or otherwise in the problematic case of gravity.