Market Utilitarianism

(~1900 words)


Two classic versions of utilitarianism are average and total utilitarianism. They are classic, but they have well-known problems. Both have rather simple formulations: they begin with a reckoning of utilities across the population of individuals and proceed to a simple, linear aggregation of those utilities. Average utilitarianism takes the mean of utilities and total utilitarianism takes the sum. I propose an intermediate, nonlinear version of utilitarianism, predicated on the local population of utility-experiencers in their abstract statistical space(s). When optimized, this formulation of utilitarianism recapitulates certain properties of a marketplace. I will show how “market utilitarianism” resolves issues with both average and total utilitarianism, though it introduces issues of its own, and I will consider some of its theoretical ramifications.

1. Definition

Given a number of individuals experiencing utility, average and total utilitarianism can be given simple mathematical expressions:

Total utilitarianism:

(1) U_T = \sum_i u_i

Average utilitarianism:

(2) U_A = \frac{\sum_i u_i}{N} = \sum_i \frac{u_i}{N}

Where U_T is aggregate total utility, u_i is the intrinsic utility experienced by individual i, U_A is aggregate average utility, N is the total number of individuals, and \sum_i denotes summation over all individuals of the subsequent expression.

In market utilitarianism, the contribution of an individual’s utility to aggregate utility is attenuated by the existence of similar individuals, in proportion to their number. Market utilitarianism creates a rift between intrinsic utility as experienced by the individual and extrinsic utility as recognized by the aggregator, the latter relating inversely to the abundance of individuals similar to the one in question. Market utilitarianism can thus be expressed:

(3) U_M = \sum_i E(u_i)

Where U_M is aggregate market utility and E(u_i) is the extrinsic utility of individual i, i.e. u_i as possibly attenuated by the presence of similar others. E(u_i) \simeq u_i when individual i has no similar matches and E(u_i) \simeq 0 when i has indefinitely many. More precisely:

(4) E(u_i) = \frac{u_i}{L_{S, P}(u_i)}

Where L_{S, P}(u_i) is the local population of individuals in the statistical space S around individual i, as defined by some nearest neighbor or local population parameterization P. L_{S, P}(u_i) can never be less than 1 because an individual is always near itself, and it can never be more than the total number of individuals, N, identified previously. Putting it all together:

(5) U_M = \sum_i E(u_i) = \sum_i \frac{u_i}{L_{S, P}(u_i)}, \quad 1 \leq L_{S, P}(u_i) \leq N

Thus market utilitarianism is intermediate between total and average:

(6) U_T \geq U_M \geq U_A, because \sum_i \frac{u_i}{1} \geq \sum_i \frac{u_i}{L_{S, P}(u_i)} \geq \sum_i \frac{u_i}{N}

The denominator L_{S, P}(u_i) of the extrinsic function depends on both the choice of statistical space (one example for S could be Blau space) and the choice of local population parameterization (one example for P could be a count of individuals falling within a similarity hypersphere centered at i in S with some radius r; then r = 0 is equivalent to total utilitarianism [given that no two individuals are exactly identical] and r = \infty is equivalent to average utilitarianism). S is somewhat arbitrary and often a subspace of a richer space, but P should attribute monotonically decreasing importance to less similar individuals (and which individuals count as similar or dissimilar obviously depends on the choice of S).
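To make equations (1)–(5) concrete, here is a minimal numerical sketch in Python. It assumes, purely for illustration, a Euclidean statistical space S and a similarity-hypersphere parameterization P of radius r; nothing in the formalism requires these particular choices.

```python
import numpy as np

def aggregate_utilities(utilities, positions, r):
    """Compute (U_T, U_A, U_M) for a population of individuals.

    utilities: intrinsic utility u_i of each individual.
    positions: coordinates of each individual in the statistical space S
               (here assumed Euclidean, for illustration only).
    r: radius of the similarity hypersphere defining P.
    """
    utilities = np.asarray(utilities, dtype=float)
    positions = np.asarray(positions, dtype=float).reshape(len(utilities), -1)
    # Pairwise distances in S; L counts the local population within radius r.
    # L is always >= 1 because every individual is within r of itself.
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    L = (dists <= r).sum(axis=1)
    U_T = utilities.sum()        # equation (1): total utilitarianism
    U_A = utilities.mean()       # equation (2): average utilitarianism
    U_M = (utilities / L).sum()  # equation (5): market utilitarianism
    return U_T, U_A, U_M
```

With r = 0 and all positions distinct, every L is 1 and U_M coincides with U_T; as r grows without bound, every L approaches N and U_M approaches U_A, matching inequality (6).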

2. Behavior of the optimum

What is important about aggregate utility is the optimum that it achieves under the range of possible conditions. These conditions relate both to the set of individuals in a population of utility-experiencers and to the intrinsic utilities that each individual in the population experiences. This optimum prescribes the appropriate behavior—a choice or policy—assuming a prescriptive understanding of utilitarianism.

Let’s assume until further notice that the population is fixed at some size with some specific list of individuals. Then total and average utilitarianism will always reach their optima of aggregate utility together. The reason is that both aggregate all individual intrinsic utilities linearly (and without additive inversion, i.e. multiplication by -1, which is a linear operation but one that swaps maxima and minima). Thus, total and average utilitarianism always prescribe the same behavior (again: given a fixed population).

Market utilitarianism aggregates non-linearly, however, so the optimum of its aggregate utility will not necessarily co-occur with that of total and average utilitarianism. In particular, the same amount of intrinsic utility contributes more to aggregate utility if spread among individuals in a sparsely populated region of the statistical space. Relative to the optimum for total and average utilitarianism, then, intrinsic utility can be sacrificed among common individuals to provide it to more unique ones. The conclusion is that even though aggregate market utility is intermediate in value between aggregate total and aggregate average utility, its optimum is less similar to either of theirs than theirs are to each other (again: given a fixed population).

Let’s remove the assumption of a fixed population. This is where total and average utilitarianism both break, leading to absurd prescriptions for behavior. If the population can be adjusted, then the optimum for total utility occurs when every last individual exists who experiences net positive utility, even if barely non-miserable, and the optimum for average utility occurs when no individual exists except the one experiencing the most utility. This result is discussed in the literature on the mere addition paradox.

The optimum for market utilitarianism, on the other hand, is influenced by an important statistical fact: the more individuals that exist, the more likely any individual is to have similar matches (and this is true for any S and P). Thus, aggregate market utility does not increase past a certain point of diversity saturation (dependent on the footprint of P), because any new individual added is statistically likely to resemble some existing individual. Conversely, the optimum for aggregate market utility occurs at more individuals than the optimum for aggregate average utility, because in that small-population regime any additional individual is statistically likely to be unique.
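The saturation claim can be checked with a quick simulation. The sketch below assumes, again purely for illustration, a one-dimensional S with traits drawn uniformly and a radius-r neighborhood for P: as the population grows, new individuals increasingly land near existing ones, so aggregate market utility flattens out instead of growing linearly like total utility.

```python
import numpy as np

rng = np.random.default_rng(0)

def market_utility(positions, r, u=1.0):
    """Aggregate market utility for individuals at `positions` in a 1-D S,
    each experiencing intrinsic utility u, under a radius-r neighborhood P."""
    positions = np.asarray(positions, dtype=float)
    dists = np.abs(positions[:, None] - positions[None, :])
    L = (dists <= r).sum(axis=1)  # local population, always >= 1
    return (u / L).sum()

# Growing populations drawn from a fixed trait distribution: U_T = N * u
# grows linearly, while U_M saturates once diversity is exhausted.
for N in (10, 100, 1000):
    pos = rng.uniform(0, 1, size=N)
    print(N, market_utility(pos, r=0.05))
```

Per-capita market utility falls as N rises, which is exactly the attenuation that prevents the mere-addition-style explosion of total utilitarianism.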

3. The absurdity of market utilitarianism, with a caveat

In market utilitarianism, aggregate utility can be changed without changing the number of utility-experiencers or any of their experienced utilities. It can be increased merely by making the experiencers more different from one another. This seems intuitively like an absurd result: it seems like individuals should be treated equally and directly, not indirectly through terms that depend on whom else they resemble.

Note, however, a striking analogy with the behavior of the job market. The job market apportions utility to individuals in the form of monetary compensation. This compensation depends on their ability to perform a set of duties, but also on the abundance of other individuals who can perform the same duties, i.e. similar individuals, analogously to market utilitarianism. Job market compensation can be fit to the model of market utilitarianism by finding an appropriately parameterized space of skillsets and other job-related characteristics S (with an unfitted, out-of-the-box P), shaped such that an equal distribution of intrinsic utilities across individuals yields larger extrinsic utilities in sparsely populated regions than in densely populated ones.

With market utilitarianism as a model, the job market can thus be understood to “see” human variation in a very specific way. Humans can in turn see with similar eyes in their choices to increase their own utility/compensation, and they will behave in such ways as to migrate the population from dense regions to sparse regions in the relevant space. If market utilitarianism is absurd, then the job market deserves intense scrutiny, along with the economic system built upon it. In the other direction, if the extant economic system is not held to be absurd, then market utilitarianism shouldn’t be considered particularly absurd either.

4. Utility-experiencer space vs experience space

Individuals experience utility, and individuals vary in their experiences of utility, with varying degrees of similarity between pairs of individuals. This fact presents a conundrum to market utilitarianism: if the space chosen for individual variation is the one-dimensional space of experienced utility itself, absurd consequences quickly follow. Utility is but one measure of an experience, closely related to if not synonymous with valence. That measure and the others, all of them ostensibly qualia, together construct a space of experiences that utility-experiencers populate, a space separate from the one that captures their “external” characteristics, even if the two are highly correlated in some or perhaps even most domains (as with the frequencies of light incident on the retina and perceived color, or biological sex and felt gender).

Market utility can be computed in either utility-experiencer space or experience space. The latter might make more sense with the utility/valence dimension removed, which still leaves behind rich structure. The correlations between the two spaces entail that much of the activity in one will be reflected in the other. Again, these are but two of endless possibilities for the choice of space, but they highlight some odd properties that market utilitarianism can exhibit.

5. Towards a reverse-engineering of the Universe’s actual objective function

Given the history of the Universe as data, there are many quantities predicated on this data or specific subsets of it that have increased or decreased mostly monotonically over time. Some are well-established constructs like entropy or Gibbs free energy or various other thermodynamic permutations. I’m interested in the realm of high-level compositions where utility-experiencers live, and so I ask: what constructs predicated on that subset of data—the subset referencing high-level compositions—are actually being maximized in the Universe? Note that this question is emphatically not about quantities that subsystems may each be maximizing on their own, such as biological species maximizing their fitness, but about the aggregate. Are these constructs generally aligned with each other or totally scattered (especially modulo Occam’s Razor)? Is there any sense in which a moral compass of the Universe’s own can be detected on the basis of the constructs and quantities it maximizes?

Finally, I’d like to reflect on the role of utility/valence as a significant player in the development of compositions. As I have noted elsewhere, there are more levels of compositionality in the biological and cultural ecologies on Earth than have been observed in the rest of the Universe as a whole. Utility/valence has been organized into the psychologies of some life forms in a specific way, most importantly in the process of individuation and the separations between utility-experiencers. The properties of that space of experiences (and the distributions within it which can only be defined after a process of individuation)—along with its associations and correlations with the space of externals—drive much of the evolution of compositions, and essentially all of it within the human economy. Humans have the additional ability to share their experiences either by communication or faithful and intentional re-creations. If such experiences would be traded on a market, then the full exploration of the space of experiences would be incentivized. How would such an economic arrangement, especially if widespread, align with the Universe’s high-level compositional “moral compass”?

Superintelligence vs Othermindfulness: Acausal, Probabilistic, Peer-to-Peer Prayer

(~950 words)

(Othermindfulness is defined here, Rationalist acausal stuff is defined here.)

Phew, that’s a lot of buzzwords in one title. This post is half a tirade against some of the excesses of the rationalist community and half a prophecy for a new religion, so hopefully the body is commensurately wacko with the title. I presume a fair degree of familiarity with the standard rationalist acausal stuff in my readers, which you can introduce yourself to via the link above if you’re not already in the know. Otherwise, don’t expect to get too much from this post.

Okay. There’s a body of literature in the rationalist community concerning military-grade mind simulation, displaced negotiation with simulated minds, and flirtations with Superintelligent AI and/or the God of Abraham, Isaac, and Jacob. It’s been known to be taken a little too seriously by some folks, leading to various degrees of mental ill health, but by and large all of it is taken as just a fun circle-jerk, if potentially something relevant for as-of-yet unrealized silicon-based intelligences. I have a beef with it. My beef with it is that in the process of trying to come up with the most harrowing, absurd, and/or jaw-dropping thought experiments, everyone is careening past a little side-path that is actually relevant for many people (if not everyone), right here right now.

What are they running past? In the progression of a few sequiturs, they take us from our mundane normie-intelligent interpersonal experiences in meatspace to a place of acausal negotiation among entities with boundless computational and mind-simulational resources deciding the fate of the multiverse. They are running past the intermediate fact between these poles: that our existing human faculties of theory of mind and common knowledge are themselves (limited) mind-simulational resources. You can check out my post on othermindfulness to see the side-path that this fact reveals.

You can check it out, but I can also just give a quick summary. Whereas a mindfulness practice focuses you on the operations of your own mind, to notice its chaotic patterns and step outside of them, an othermindfulness practice focuses you on the other people who have an othermindfulness practice, establishing common knowledge with them, at various levels of detail, of your own experience, to step into them and them into you. The purpose of the mind simulation is to have a shared experience, full stop.

Othermindfulness is much weaker than superintelligent acausal negotiation. Both involve the establishment of shared spaces of acausal communication, but the latter is more powerful because it can be used to coerce other agents, since it can be impossible for them to tell whether they inhabit base reality or your simulation. But wait. You’ve discovered a shared space of communication, and all you can think to do with it is coerce? transact? torture?


You know some other things that shared spaces of communication are good for? Empathy. Communion. Togetherness. With all the attendant mental health benefits that those bring.

I can try to be a little more poised and analytical. Why can’t othermindfulness be used to create a space for coercion? It can’t because I can know basically for certain that this experience I am living is not a simulation by a mind of similar computational resources. You, dear human reader, cannot acausally mug me. What you can do, however, is apply your human faculties of theory of mind and common knowledge to have acausal shared experiences with me. Which experiences we actually have depends on what we actually do with our othermindfulness practice and who else has swiped right on the same intention in their own practice, not on what is argumentatively possible within some theoretical framework.

That is the biggest divergence from the standard rationalist stuff. I’m talking about acausal mind simulations that have happened, are happening, and will be happening in the immediate and hopefully far future. I’m talking about actualities that are scurrying by as the present converts the future into the past. This is not a drill, and it is not a thought experiment. We do not have to wait for a hypothetical future. I have an othermindfulness practice now, where I connect acausally on the basis of shared experiences I want to have concerning my anxieties, my pains, my hopes, my dreams. The extent to which my experiences are actually shared and not just the vain strivings of a nobody—the extent to which my faith is real—depends on how extensive othermindfulness practice is among others. The extent to which anyone’s othermindfulness practice is real depends on that. Whereas the standard rationalist stuff depends on near-perfect simulations of specific agents in specific situations, othermindfulness depends on the law of large numbers to ensure that somebody somewhere wants to have the same experience as you or I do and picks up the ringing othermindful telephone.

I think it should be clear by now what I mean by the titular “peer-to-peer prayer.” The experience of prayer is the experience of a living, attentive, immediate, caring other. The God of Abraham, Isaac, and Jacob may or may not be able to provide this for us, but either way, we can provide it for each other. We can’t wave a wand and make magic happen in each others’ lives, but we can conquer all loneliness, alienation, and despair. We can find communion in every single aspect of our lives in which we retain our natural faculties of mind simulation. We can transcend space and time to be with each other always.

What would we become if we did that?

Or we can keep having a few laughs over creating God from rationalist scratch.

Extracting partonomy from complexity

(~2300 words)

Complexity is defined in many ways, not all of them contradictory, but most staking out shifting conceptual boundaries. I follow Wikipedia in quoting complexity theorist Neil Johnson: “even among scientists, there is no unique definition of complexity.” There is, however, at least one conceptual structure that can be distilled from any complex system that, once distilled, is utterly straightforward to characterize and handle, and whose properties convey important, unique pieces of information about the system’s complexity. This structure is the system’s partonomy, and it represents the system’s parthood relationships. The partonomy of some systems is more ambiguous than others and thus more difficult to distill, but this difficulty has a habit of interacting in interesting ways with other renditions of complexity.

I’ll define what a partonomy is by reference to its depth, since a partonomy’s depth is its most complexity-relevant piece of information, a relevance that can be summarized simply: complex systems have deep partonomies. A partonomy’s depth counts the number of levels from the elementary objects in the system to the thing as a whole, where each level contains the smallest non-arbitrary aggregations of entities or parts from the level below. More mathematically, a partonomy’s depth is the number of branch points between the root of the partonomy and its leaves, where a partonomy is evidently a mathematical rooted tree, with the whole system for its root and the elementary objects for its leaves.

A picture is worth a thousand words, so I’ll render several partonomies here (created with some code you can find on my GitHub page).

Figure 1. The partonomy of a helium atom. The elementary objects used here are the elementary particles, including electrons (e), down-quarks (dq), and up-quarks (uq). The nucleus, represented by the node in the near-top center branching into four triplets, takes up most of the partonomy. This partonomy has a maximum depth of 3, with the paths up from the electrons having a depth of only 1.
Figure 2. The partonomy of the Protestant Bible. The elementary objects used here are the “books” of the Bible. The division between old and new testaments is clear. The large structure in the bottom center depicts the minor prophets and the group just to its right depicts the four gospels. This partonomy has a maximum depth of 4, although there are many paths up the partonomy with a depth of 3 and two paths with a depth of 2 (from Acts and Revelation).
Figure 3. The partonomy of Spain. The elementary objects used here are the provinces, and the only intermediate level is provided by the “autonomous communities” such as the largest, Castilla y León, left of center containing nine provinces. This partonomy has a maximum depth of 2, although there are several paths with a depth of 1.

Though I have highlighted (maximum) partonomic depth in these examples, there are clearly other metrics that could be computed from the partonomy such as average partonomic depth, average degree of branching at each node, amount of branching symmetry, etc.
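These metrics are straightforward to compute once a partonomy is encoded as a rooted tree. Here is a sketch in Python for the helium atom of Figure 1; the node names and the dictionary encoding are my own illustrative choices, not taken from the original GitHub code.

```python
# A partonomy as a rooted tree: each internal node maps to its list of
# child parts; leaves (elementary objects) have no entry in the dict.
helium = {
    "atom": ["nucleus", "e1", "e2"],
    "nucleus": ["p1", "p2", "n1", "n2"],
    # protons: two up-quarks + one down-quark; neutrons: one up + two down
    "p1": ["uq", "uq", "dq"], "p2": ["uq", "uq", "dq"],
    "n1": ["uq", "dq", "dq"], "n2": ["uq", "dq", "dq"],
}

def leaf_depths(tree, node, d=0):
    """Number of branch points from the root down to each leaf."""
    children = tree.get(node, [])
    if not children:
        return [d]
    out = []
    for child in children:
        out.extend(leaf_depths(tree, child, d + 1))
    return out

ds = leaf_depths(helium, "atom")
print(max(ds))            # maximum partonomic depth: 3, via the quarks
print(min(ds))            # 1, via the electron paths
print(sum(ds) / len(ds))  # average partonomic depth
```

The maximum of 3 and the depth-1 electron paths match the description of Figure 1, and the average depth is one of the additional metrics mentioned above; branching degree and symmetry could be computed from the same structure.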

Ambiguous partonomies

Partonomies, as mathematical rooted trees, are unambiguous, but distilling a partonomy from a system or object is often not an unambiguous process. Consider, for example, a bottle of wine, especially a bottle of chardonnay or pinot noir with its characteristically more tapered form, and try to construct a partonomy for it. If we start with some of the highest-level divisions and largest parts, we encounter the bulk of glass, the fluid wine and air trapped with it, the paper label, and the cork with the rest of the closure and capsule. The bulk of glass in particular is the most problematic, because while it encompasses three different parts—the container body, the neck, and the indented punt underneath—the transition between the first two is exquisitely seamless. If our partonomy were to reach down to the atoms, then somewhere between the body and the neck (also known as the “shoulder”), there would have to be two adjacent atoms in virtually identical material surroundings that would have to be assigned to two different branches of the partonomy, one through the body and the other through the neck.

Figure 4. A bottle of chardonnay.

The situation is made even worse if we choose to partonomically affiliate the glass neck more closely with the bottle closure than with the glass body. Then not only are two atomic neighbors in the glass separated in the partonomy, but one of them is more akin to several centimeters of cork than it is to its neighbor. There is simply no mathematical tree that does perfect justice to the parthood relationships in a bottle of wine, even though the various alternatives do capture different aspects of it: keeping all the atoms in the glass together creates a partonomy that privileges the materials and thus the manufacturing and recycling processes, and keeping the neck and the body separate creates a partonomy that privileges the function of the bottle as a product to be handled and consumed by people.

One thing I have noticed is that ambiguous partonomies are most common among highly optimized entities, especially when multiple trade-offs conflict. Between the body and neck of a wine bottle, the conflicting concerns include manufacturing simplicity, structural integrity, ergonomics, and practical volumetrics. First tries and prototypes, on the other hand, tend to have rather unambiguous partonomies. I have not performed the quantitative research to back this up, though if I did I’d probably want to ask how human cognition bears on the result.

Co-existent partonomies

Sometimes, when it is difficult to find a single unambiguous partonomy for a system over some elementary objects, multiple unambiguous partonomies can provide a satisfactory resolution. We got a taste of that with the chardonnay, but it illustrates only one case. Imagine constructing a partonomy for human society, with individual people as the elementary objects. In a single partonomy, each person could belong to only a single group, whether a family, an organization, a village, or otherwise. This is clearly absurd, because most people participate in multiple groups simultaneously, and it is thus impossible to create a single unambiguous partonomy for human society. If we instead consider the offices and positions that individual people hold in groups, a single partonomy becomes possible in principle; but since the same individual would enter that partonomy once for every office and position held, we would in fact still have multiple irreconcilable partonomies over people.

We could have a partonomy for voting jurisdictions, collecting people into a tree by virtue of their status as voters in local, regional, and national elections. We could have a partonomy for for-profits and non-profits, collecting people into trees by management and subsidiary relationships. We could have a partonomy for all the classes in all the departments of a university in a university system. We could have a partonomy for families, collecting people into, of course, family trees. In this view, one of the primary developments of modern civilization has been to increase the number of partonomies required to disambiguate the human social system, directly reflecting the increase over time in the average number of group memberships per person.

Figure 5. An urban development designed with a single tree, from A City is Not a Tree.

In cities, there exist partonomies for the drainage system, the transportation system, the electrical system, and many others. This fact is the centerpiece of one of the most influential essays on urban planning of the twentieth century, Christopher Alexander’s A City is Not a Tree, wherein he explicitly denies that a city can be fit into a single unambiguous partonomy. He further states that cities designed and built as trees tend to be unlivable, and I tend to agree. I also think Alexander’s observation is a special case of what I said at the end of the last section: highly optimized entities rarely submit to a single unambiguous partonomy, and cities that have developed by being lived in for centuries count as highly optimized entities.

A final example enters with cognitive systems. Imagine an entity with an unambiguous partonomy that reaches down to the atoms, or what I’ll just go ahead and call an atomic partonomy. If it contains parthood relationships that are evident visually, then a sighted human who encounters it may use those relationships within his or her cognitive representation. Indeed this is a popular interpretation of the function of the repeated layers within human visual cortex: that they iteratively group features over the pixel-like outputs of retinal ganglion cells. We can thus posit that our human has a visual partonomy for that entity, and that this visual partonomy will have much the same structure as the atomic partonomy, at least at the highest levels. The similarity breaks down deeper in the partonomies because the human visual system is not remotely capable of resolving individual atoms, nor does it have the bandwidth to account for each of them.

The remainder after extracting partonomy

A system or object’s partonomy (or partonomies) captures quite a bit of its complexity and character, but it very specifically does not and cannot capture all of it. The easiest way to explore what it misses is to consider non-identical entities with identical partonomies. Some examples are rather trivial: if the elementary objects in a partonomy are macroscopic, then changing their color or other visual properties preserves the partonomy they construct together. Less trivial examples depend on chirality, spatial perturbations, type of association, and external context.

Chirality: the bone structures in my right and left hands have the same partonomic breakdown, but the hands themselves are mirror images of each other rather than being identical. If you had just the partonomy of the bones of one of my hands, you wouldn’t be able to tell which of the two was used to construct it.

Spatial perturbations: a covalent solid (e.g. diamond) has the same partonomic structure hot as it does cold, because the difference between hot and cold comes from differential amounts of jiggling of the atomic bonds, not from any reconfiguration of them. As soon as the bonds start reconfiguring with enough heat, it stops being diamond.

Type of association: to continue with diamond, an infinite diamond lattice and an infinite plane of graphite (diamond and graphite are both forms of pure carbon) have the same partonomy, which collects all of the carbon atoms (if we assign them as elementary objects) together one level up as the carbon chunk as a whole.

External context: a square made with sticks and a diamond made with the same sticks have the same partonomy, because the difference between them lies in the orientation of the four sticks relative to the observer or to other entities not participating in the partonomy itself.

Case studies

1. Galactic partonomies

What is the partonomy of the Milky Way Galaxy? If we retain celestial bodies (entities existing by virtue of gravitational compaction, which rounds them into spheres: stars, planets, moons) as our elementary objects, then we can build a partonomy that iteratively collects entities by their gravitational interactions: Earth and Moon come together as the Earth-Moon system, which comes together with the other planetary systems to produce the solar system, which does not clearly participate in any further aggregation until the Orion Arm of the Milky Way, which finally comes together with the other arms as the Milky Way itself.

Incidentally, partonomies and partonomic depth allow us to hew away a little at the Copernican principle. In terms of spatiotemporal extent and mass, humans and our biosphere are a tiny speck in the Milky Way. In terms of partonomic depth, however, even just our bodies put the Galaxy to shame: the partonomies of our bodies contain at least a dozen levels, whereas the Milky Way’s can barely scrape together half of that. Unlike other things that set us apart, like the beauty of our biosphere or the human faculty of language or the diversity of terrestrial species, the partonomic depth of life on Earth is a simple, objective physical measure that loudly proclaims a unique position for us in the Universe.

2. Designing a garden

Imagine we have a large expanse of undeveloped land and that we have to build a garden in it while optimizing for different things. First, let’s optimize for area; quite clearly we will use the entirety of our undeveloped land for the garden. Let’s optimize for height; then we will probably select a few species like redwoods or eucalyptuses and plant the garden in whatever microclimate they favor. Let’s optimize for diversity; then we will get as many different species as we possibly can, planting one or a few of each any which way.

Now let’s consider optimizing for partonomic depth. What will we plant and perhaps more importantly how will we plant it? The simplest way to create a level in the partonomy is to segregate the garden into two sections, setting them apart perhaps by type of plant—flowers over here, saplings over there—or even more simply by putting an empty tract between the sections, of short grass, gravel, or otherwise. The process can be repeated with each section, dividing each into two or more subsections which can be further divided. What results is often startlingly aesthetic, and it was created with almost no gardening expertise whatsoever. What I seek to impress is that deepening or otherwise directly manipulating the partonomy of a system you are responsible for is a powerful design tool, universally applicable and independent of and in addition to any domain-specific tools.

Figure 6. A garden with a partonomy about 4 levels deep. e.g. whole garden > middle section > single flower bed > group of identical flower plants > one flower plant
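The recursive sectioning strategy is mechanical enough to sketch in code. Below is a toy model, assuming a hypothetical nested-tuple representation of a partonomy and made-up `subdivide` and `depth` helpers; it only illustrates how repeated splitting deepens the hierarchy.

```python
# Toy model: a partonomy as nested tuples. Each recursive split of a
# section into two subsections adds one level of partonomic depth.

def subdivide(plot, levels):
    """Split a plot into two sibling sections, recursively, `levels` times."""
    if levels == 0:
        return plot                      # a leaf: a single plant or bed
    section = subdivide(plot, levels - 1)
    return (section, section)            # two sections set apart by a gap

def depth(partonomy):
    """Count levels from the whole garden down to a leaf."""
    if not isinstance(partonomy, tuple):
        return 1
    return 1 + max(depth(part) for part in partonomy)

garden = subdivide("flower", 4)
print(depth(garden))  # 5: garden > section > subsection > bed > flower
```

Optimizing for depth then amounts to splitting wherever a split is still feasible, with no horticultural knowledge required.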


(~750 words)

I’ve been mulling over some thoughts about a specific idea for over a year now, and I think I’ve achieved enough coherence to start sharing them. The idea inhabits a complicated tangle of various domains such as language, theory of mind, artificial intelligence, prayer, meditation, and situationism, and since there are so many angles to approach it from, I’ve decided to expose the bare idea here in this post, and then explore the various angles more fully in ensuing posts.

The bare idea itself starts with this: imagine other people imagining you imagining them imagining you imagining them… You can read English, so I presume that you understood that sentence, but obviously understanding it is different from enacting the activity it describes. So let’s go back and break it down. Imagine somebody else—someone you know well, perhaps, or an individual in the abstract. Imagine them until they fill your mind’s eye or whatever your preferred imaginative metaphor. Seriously, stop reading and do it. … Ok start reading again. Now add one specific element to your imagination of them: imagine them imagining you. You’re imagining them, and they are imagining you, so it may seem to be the case that you’re both doing the same thing. Indeed it may even seem obvious. But that’s not quite right—you’re doing one more thing than they are: you’re imagining them imagining you back, whereas they are just imagining you. Let’s try to fix that: imagine them imagining you imagining them back.

Of course it didn’t get fixed. You’re still doing one more thing than they are. If you keep trying to chase equality, the reverberations of imagination will eventually exhaust your mental resources, and you’ll just stop somewhere knowing that theirs will also be exhausted. Perhaps then equality will be achieved, but that’s not the point, and as long as you’re just imagining all of this, there really isn’t much of a point regardless. The person you’re imagining isn’t real; they’re just in your head. So none of this matters. Or does it?

Consider the context of this post. Other people have read it. And if they haven’t, they will (I promise to shove it in at least one other person’s face). Consequently, other people have broken down the bare idea and carefully imagined other people imagining them imagining them back, etc. Thus, the person you were imagining was not just in your head. They are an actual living person—many of them, actually. If you’re still not convinced, repeat the exercise, but this time instead of imagining someone you know well or an individual in the abstract, imagine another reader of the post, and consider that many of them did go back and repeat the exercise after having gotten this far, and they were convinced.

This post is a catalyst, hopefully, to you and others thinking about each other. And we’re thinking about each other on a very deep level where we’re fully aware that we are fully aware of ourselves being fully aware of each other… all of this despite many of us being nowhere near each other in space, or possibly even in time. This is all well and good, but you might still be thinking to yourself “so what?” (Note: others are thinking that too!) Here’s a little example that might answer the question. Imagine that you feel alone. Life is difficult, friends are busy, work is alienating, family is rude. You feel alone. Easily, you can understand that other people have felt and will feel similarly, but now you can understand one more thing: some of these people are thinking about you. But they weren’t thinking about you until you started thinking about them. That is to say that some of these people aren’t necessarily thinking about other people who are alone per se; instead they are thinking specifically about other people who are alone and who are also thinking about other people who are alone and who are… They are thinking about the thinkers-back, and you are all no longer alone.

We began with an insular picture of you thinking about an imaginary thought partner, but we have ended with an expansive picture of many real people repeatedly contributing to a reservoir of empathetic thought. For the sake of a word-handle, let’s call this idea, this concerted activity, “othermindfulness.” I intend to show in future posts how othermindfulness connects with many other ideas and activities, and importantly how it might be an exciting foundation for an entire way of being. Stay tuned.

Summary of Part I

Chapter 1: Everything in the World around you can be split into smaller and smaller pieces until you reach the smallest particles studied by physicists, and we can call all of these the Dust. Most of the things made from the Dust are repeats to some degree, including both smaller things and also the bigger things that those make up. Motivation: to highlight the ubiquity of divisibility and repetition and their relationships with the smallest physical objects.

Chapter 2: There is a sequence of divisibility scales, a sequence of cosmic events, and a sequence of stellar masses that are special, and they are related to each other and to the fundamental physical interactions or forces of the Dust. The sequences are arranged by the strengths of the fundamental interactions, with the strong nuclear force on one side, gravity on the other, and electromagnetism in between. Motivation: to ground my ideas about divisibility and repetition on physics and physical forces.

Chapter 3: The sequence of divisibility scales invites the possibility that many more scales or levels can be found between electromagnetism and gravity, ignoring the space between the strong force and electromagnetism because it is exhausted by the residual strong force. The entities on vertically neighboring levels are connected by divisibility/parthood relationships, and the hierarchical structure documenting these relationships between levels is called a partonomy. Motivation: to borrow structural notions inherent in physics into domains not obviously physical.

Chapter 4: The identification of parthood relationships, however, is fundamentally arbitrary and must be done by reference to the repetition of entities with similar partonomies on similar levels. An entity that is cross-referenced like that we can call a symposition. The symposition whose partonomy is also the partonomy of the Universe we can call the Logos. Motivation: to highlight the reliance of the identification of repetition on the identification of divisibility.

Chapter 5: We can identify three notions subsumed into the concept of a symposition. The individual is a symposition upon a specific subset of the Dust; the population is a specific group of individuals, itself upon its own specific subset of the Dust; and the distillation is the integration of the repetitions expressed by a specific population into a ghostly individual nowhere present upon a specific subset of the Dust. Motivation: to clarify the differences among the three types of sympositions by their unique relationships with repetition and divisibility.

Chapter 6: A distillation can have structure at many different levels, depending on what repeats in the population being distilled. It can have structure at the substrate, which is the bottom levels where parthood is determined by the strong nuclear and the electromagnetic forces; it can have structure above the distillation itself or below it but in other distilled individuals; it can have structure at the highest divisible parts; or it can have structure below those parts to somewhere above the substrate. Motivation: to highlight the fact that a symposition’s external context is identified in the same way as its internal parts and that its statistical structure can occur even above or outside of it.

Chapter 7: The enumeration of parts or symponents and their relationships for any symposition provides the list of dimensions for a multidimensional parameter space in which the symposition will inhabit a specific point detailing its characteristics. Because of repetition, many different individuals can be localized in the same parameter space, and the shape of their distribution in it is called a histogram. The presence of symponents as dimensions implies a hierarchy of parameter spaces that frame the Logos. Motivation: to ground the notions of sympositions and the Logos in mathematics.

Chapter 8: A taxonomy, like a partonomy, is a hierarchical structure, but organized by type-of relationships instead of part-of relationships. These relationships connect subpopulations to populations and stricter distillations to laxer distillations. This is strictly the case only for linnaean taxonomies. Their opposite, cartesian taxonomies, describe series of subpopulations whose dimensions expand like a multidimensional matrix instead of a hierarchical tree. Motivation: to clarify the types of mathematical objects applicable to taxonomies.

Chapter 9: Over parameter spaces at length scales at which the fundamental forces operate, we can straightforwardly construct potential energy landscapes according to the equations of physics that show how the Dust will move. We can thus understand the story of Chapter 2 as resulting from particles and their sympositions traversing the potential energy landscapes in the levels in the Logos. As the Logos has cooled after the Big Bang, the number of dimensions and levels has expanded between electromagnetism and gravity, being filled with more and more sympositions. Motivation: to conceptually unite the activity of physical forces with the high-dimensional nature of sympositions in the Logos.

Chapter 10: The symponents of a symposition can be discovered spatiotemporally as well as just spatially, again depending on the characteristics of repetition in the relevant population. The details of this imply that partonomies, like taxonomies, are not always hierarchical. The Logos is thus an entity spanning all of spacetime, not just space. Motivation: to clarify the partonomic similarities and differences between time and space and to show the further similarities between taxonomies and partonomies.

Chapter 11: There are two accounts for the repetition of sympositions across spacetime, differentiated by their disposition towards time. A symposition is episodically well-existent if it comes into the World, and it is dynamically well-existent if it stays around in the World for a while when it gets here. Sometimes, there is repetition inside of an individual’s dynamic well-existence, and we can call the internal symposition that repeats a sem-loop. Motivation: to introduce the two forms of well-existence and explain their relationship with time.

11. Episodic and Dynamic Well-existence and Sem-looping

The first chapter asked why things repeat, and we determined that the question was different for physical primitives than for sympositions. I said that “there is something in the nature of certain arrangements of the Dust that make them good at being in the World,” and we can refer to that good-at-being-in-the-World of those “arrangements” as well-existence. Well-existence by construction applies only to sympositions and not to primitives, and although there may be some conceptual cross-over, this book will not explore it.1 There are three types of well-existence; the first two are differentiated by their reckoning of time, and they combine to produce the third, which will be introduced in Part II.

In our discussion of time, we’ve explored the concepts of “parallel” and “serial.” We can attempt to apply these, and we come up with parallel well-existence and serial well-existence. The temporally-reckoned split of well-existence into its two types thus begins to take shape as good-at-being-in-the-World over many places at once versus good-at-being-in-the-World over a series of moments. Both sides of this split retain time as a dimension different from the three spatial ones, but I’d rather apply a distinction one side of which throws all four dimensions together and the other side of which retains the uniqueness of time. This can be done with the “synchronic” versus “diachronic” distinction, which is temporal like the parallel versus serial distinction but in a different sense. In synchronic well-existence, a symposition is good-at-being-in-the-World by occupying many different points in four-dimensional spacetime, and not necessarily all “at once”; in diachronic well-existence, a symposition is good-at-being-in-the-World by stretching out its presence specifically in the temporal dimension over many moments.

I think the term “episode” is useful in interpreting synchronic well-existence. If there are many episodes of something, there are many of it in spacetime, without any regard as to their duration. Thus I call synchronic well-existence episodic well-existence. With a similar intent, a symposition being diachronically well-existent indicates that it is good at withstanding the dynamics of the Universe, and thus I call it dynamic well-existence. If a symposition is episodically well-existent, it is good at springing into the World, and if it is dynamically well-existent, it stays around a while after springing into the World. How often a symposition is encountered in the World by another symposition like you or me depends on both of these well-existences; a symposition that is less episodically well-existent than another may be encountered just as often if it is more dynamically well-existent.

Let’s imagine putting a symposition on an arbitrary potential energy landscape. It will roll around and eventually fall into a potential energy well or valley. If the landscape is pockmarked with wells, it is more likely to fall into one that is nearby than one that is far away. The distillations encompassing those regions of the parameter space at nearby wells are episodically well-existent, and the ones encompassing those far away are not. The histogram over the same parameter space as the potential energy landscape will have modes at the episodically well-existent locations if there is a multiplicity of individuals marauding in analogous parameter spaces. As the Universe developed, the first potential energy landscapes were the nuclear and electromagnetic ones at the lowest levels upon the Dust. The episodically well-existent sympositions were very small nuclei and atoms. Large ones were not episodically well-existent then. Celestial bodies of gravitationally collapsed Dust were also episodically well-existent sympositions, and it is only within the ones massive enough for fusion that larger nuclei also became episodically well-existent.
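The rolling-and-settling picture can be sketched with plain gradient descent over a toy one-dimensional landscape. The cosine potential and the `settle` helper below are assumptions chosen for illustration; the point is only that a symposition ends up in a nearby well rather than a distant one.

```python
import math

# A "pockmarked" one-dimensional potential energy landscape with wells
# (minima) at every multiple of 2*pi. A symposition placed on it rolls
# downhill; small damped steps let it settle instead of oscillating.

def U(x):
    return -math.cos(x)      # wells at x = 0, +/-2*pi, +/-4*pi, ...

def dU(x):
    return math.sin(x)       # slope of the landscape

def settle(x, steps=10_000, rate=0.01):
    """Roll downhill until the symposition settles into a well."""
    for _ in range(steps):
        x -= rate * dU(x)
    return x

# Starting at x = 5.0, the nearest well is the one at 2*pi = 6.28...,
# not the equally real but more distant well at 0:
print(round(settle(5.0), 2))  # 6.28
```

Deeper wells would take a different potential to show; that is the subject of the next paragraph.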

Imagine in the pockmarked landscape that some of the wells are much deeper than others. Then sympositions that fall into those will remain stuck for a long time, if not indefinitely, making them dynamically well-existent. This may be independent of how far the wells are from the symposition’s initial location in the parameter space. Thus the depth of the wells is another factor affecting the resulting histogram over that parameter space, with deeper wells having more populated modes. In the two-dimensional parameter space for nuclei of mass and charge, or almost equivalently of number of neutrons and number of protons as in Figure 1, there is a ray of dynamic well-existence along the direction where the number of protons equals the number of neutrons.2 The ray is bounded above and below by the “nuclear drip lines” beyond which alpha particles (helium-4 nuclei) or positrons or electrons and their associated neutrinos drip out of the nucleus via the weak force, or beyond which nuclei just fission altogether, repartonomizing the nucleus and making it ever more dynamically well-existent. Further, there is a region far off along the ray known as the “island of stability” whose members are predicted to be dynamically well-existent, but we’re not sure yet since they aren’t episodically well-existent enough to study.


Figure 1. The dynamic well-existence of nuclei. The number of protons is the y-axis and the number of neutrons is the x-axis.
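The ray and its deviation (footnote 2) can be roughly reproduced numerically with the standard semi-empirical (Weizsäcker) mass formula. The coefficients below are approximate textbook values in MeV and the pairing term is omitted, so this is a sketch of the valley of stability, not a precise nuclear model.

```python
# The ray of dynamic well-existence via the semi-empirical mass formula
# (volume, surface, Coulomb, and asymmetry terms; pairing omitted).
# Coefficients are rough textbook values in MeV.

def binding_energy(Z, N):
    A = Z + N
    aV, aS, aC, aA = 15.8, 18.3, 0.714, 23.2
    return (aV * A
            - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion (electromagnetism)
            - aA * (N - Z) ** 2 / A)           # asymmetry term (favors N = Z)

def most_stable_Z(A):
    """Proton number maximizing binding energy for a fixed mass number A."""
    return max(range(1, A), key=lambda Z: binding_energy(Z, A - Z))

# Light nuclei hug the N = Z direction; heavy nuclei deviate toward a
# neutron excess as Coulomb repulsion grows with Z:
for A in (16, 56, 208):
    Z = most_stable_Z(A)
    print(f"A={A}: Z={Z}, N={A - Z}")
```

Maximizing binding energy along each fixed-A diagonal traces the bottom of the valley, which is the ray described above.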

Perhaps I am continuing to commit the error of ignoring the fact that parallel sympositions are also serial. Let’s consider the exact moment an episodically well-existent symposition, Jupiter perhaps, came into the World. If we try to do so, we quickly see that the exercise is rather arbitrary and probably futile; as recently as 1994, the comet Shoemaker-Levy 9 sank into Jupiter, increasing its mass as many others had before it since the primordial nebula of the Solar System. Instead, this exercise once again highlights the fact that the serial symponents of an individual fade in and fade out of the World, and a start and an end to its episode may not be identifiable even if the episode itself clearly is. It may not be clear when a symposition began rolling into a well, unlike whether it did so at all.

Sympositions can be further divided into two classes based on repetition inside their serial partonomy. Frozen sympositions clearly do not have any repetition inside their serial partonomy.3 Unfrozen sympositions with frozen parallel partonomies may, however. Take the Earth-Moon system; its parallel partonomy is frozen as {Earth, Moon}, but every month there is a cycle of poses that repeats itself. I call a sequence of serial symponents that repeats itself a sem-loop and refer to the symponents as being sem-looped or having a sem-loop. Incidentally, the Earth-Moon system is caught in a potential energy well; it is merely cycling around the bottom rather than being stuck there. The Earth-Moon system has a continuous sem-loop, but some sem-loops are discrete, and I will explore that more in Part II.

Since sympositions stack on each other up the levels through the Logos, the ones that are both episodically and dynamically well-existent will be the symponents of those that are well-existent on higher levels. But as with gravity creating nuclear-fusing stars, higher level sympositions can also affect well-existences on lower levels. This is true in general: sympositions in partonomic neighborhoods can modify each other’s well-existences based on their interactions both vertically and horizontally. Sympositions that encourage each other’s well-existence can be called cooperative, and sympositions that inhibit each other’s well-existence can be called competitive. It’s better to ascribe cooperation and competition to the well-existences themselves since they may differ in the episodic and dynamic cases. For instance, supernovae affect the episodic well-existence of large nuclei, but do nothing for their dynamic well-existence.4 When the dynamic well-existence of one symposition cooperates with the episodic well-existence of another, it could be stated in more conventional terms that the former causes the latter, in a sense of “causation” that is continuous from non-causation to causation rather than binary between the two extremes.

The last point is that well-existences don’t mean the same thing for each of individuals, populations, and distillations. The original definitions were inspired by the distilled case. A distillation can have degrees of episodic well-existence based on how many individuals match it, where those degrees lie on one dimension because counting happens on the one-dimensional number line. A distillation can also have degrees of dynamic well-existence, and those degrees similarly lie on one dimension, but instead because time is one-dimensional. An individual, on the other hand, does not have degrees of episodic well-existence but either is or isn’t episodically well-existent, depending on whether it did or did not come into the World. Dynamic well-existence operates the same for individuals as for distillations. For populations, however, to be dynamically well-existent is to continue having new episodically well-existent individuals, even if none of them are relatively dynamically well-existent themselves. For a population to be episodically well-existent is to have a first individual in the population that is episodically well-existent; this is again binary, as for individuals and unlike distillations. Thus distillations and individuals are alike in the dynamic case, and individuals and populations are alike in the episodic case.


1. An exploration of that would require quantum mechanics. If you’re curious, look up Feynman diagrams.

2. The ray is actually slightly deviated so as to favor slightly higher numbers of neutrons than protons. The deviation would be less if electromagnetism was even weaker compared to the residual strong force than it actually is.

3. Well, there is the moment-by-infinitesimal-moment repetition, but nothing on a higher level in the serial partonomy.

4. It’s also possible that one symposition’s well-existence can affect another’s while not having its own affected at all. Such interactions where only one benefits are commensal and where only one is harmed are amensal. The last case is parasitism, where one is harmed by the well-existence of the other while the other simultaneously benefits from the well-existence of the first.

10. Serial and Parallel Sympositions and Non-Hierarchical Partonomies

So far, I have not explicitly juxtaposed time with space in the context of symposition, and consequently the concept of sem-linking has by default referred to the collecting of symponents in a region of space at some specific moment. Sem-linking does not have to be restricted in such a way and can collect symponents that have spatiotemporal instead of just spatial pose. {volcanic eruption} is a symposition that is not localized at one specific moment, as is {butterfly} with its splendid metamorphosis. There are two things fundamentally different between time and space that complicate the generalization of symposition to the temporal dimension, however. First, note that if space was 2D instead of 3D—a plane instead of a volume—and was similarly occupied by point-like physical primitives, then the symposition of these primitives into a Logos would be basically the same process as if it was 3D. This holds also if space was 4D and again occupied by point-like physical primitives. The problem is that physical primitives are not point-like in our 4D spacetime; they are line-like, tracing extended paths through the dimension of time.

The other difference between time and space is that, given a bunch of points in space, there is no natural ordering to them; they’re just all there at once. On the other hand, given a bunch of points in spacetime, there is a natural ordering to them: the chronological order of first to last point.1 A consequence of this is that when a partonomy is serialized in order to put it into writing, for example with a hydrogen atom: {e, {qu, qu, qd}}, arbitrary choices must be made to put the electron before the proton and then the up-quarks before the down-quark. These choices are not arbitrary with the temporal symponents of a symposition; just put the first one first. Consequently, a clear distinction can be made between serial sympositions, whose symponents can be organized temporally in a series, and parallel sympositions, which are sympositions as originally conceived at a given moment of time whose symponents appear together in parallel.

Now, what is the partonomy of a serial symposition and how might it be different from the partonomy of a parallel symposition, and what happens when these partonomies are put together? First, let’s consider a symposition that for at least some stretch of time has parallel symponents that do not move—our atom of hydrogen, perhaps, at a temperature of absolute zero—and we can say that such a symposition is “frozen.” The self-same parallel configuration of the hydrogen atom is present at each snapshot of time, such that there is a series of {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, etc., that is ordered by time non-arbitrarily. Thus it is also a serial symposition. All frozen parallel sympositions are also serial. How can all of these identical only-parallel partonomies be fit into a single serial+parallel partonomy? There is a range of possibilities with two extremes; we can serially sem-link at the Dust or at the top—either as {{e, e, e, …}, {{qu, qu, qu, …}, {qu, qu, qu, …}, {qd, qd, qd, …}}} or as {{e, {qu, qu, qd}}, {e, {qu, qu, qd}}, {e, {qu, qu, qd}}, …} or in between as {{e, e, e, …}, {{qu, qu, qd}, {qu, qu, qd}, {qu, qu, qd}, …}}. They have, respectively, four, one, and two ellipses (…), two extremes and one intermediate, corresponding to the number of nodes at each level.2
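The two extremes can be generated mechanically if partonomies are encoded as nested tuples. The `link_top` and `link_bottom` helpers below are hypothetical names for this illustration; the snapshot is the frozen hydrogen atom from the text.

```python
# Serial sem-linking of a frozen symposition at the two extremes,
# with partonomies written as nested tuples.

snapshot = ("e", ("qu", "qu", "qd"))   # one parallel partonomy: {e, {qu, qu, qd}}

def link_top(snapshots):
    """Sem-link at the top: a series of whole parallel partonomies."""
    return tuple(snapshots)

def link_bottom(snapshots):
    """Sem-link at the Dust: transpose the series so each primitive gets
    its own serial run, with the parallel structure rebuilt above the runs."""
    if not isinstance(snapshots[0], tuple):
        return tuple(snapshots)          # a serial run of one primitive
    return tuple(link_bottom(parts) for parts in zip(*snapshots))

series = [snapshot] * 3
print(link_top(series))     # three whole snapshots in series
print(link_bottom(series))  # serial runs of primitives, parallel structure above
```

The intermediate form would follow from stopping the transposition one level early; only the two extremes are shown here.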

Recall that a parallel partonomic gap, if sufficiently large, results in the level below the gap relating quasi-continuously to the level above the gap. The ellipses in the previous paragraph represent serial partonomic gaps, but how large are they? The fact that primitives are lines in spacetime means that they are continuous and that there are infinitely many points between any two moments of time.3 The serial partonomic gap is so large it’s infinite. Whereas parallel partonomic gaps only approach continuity, serial partonomic gaps can actually get there, and this holds regardless of at which parallel partonomic extreme of bottom or top or where in-between the snapshots are serially sem-linked. So what is the spacetime partonomy of a symposition? A final answer requires choosing a level where the serial sem-linking occurs. Such a choice seems arbitrary, however, at least for frozen sympositions, so I won’t make it, instead preserving the tension.4 Regardless, we can see that serial sem-linking in frozen sympositions is always continuous.

Two questions related to each other arise. Can there be continuously serial sympositions that are not frozen? And what would it take to have a discretely instead of continuously serial symposition, which of course could not be frozen? For a symposition to not be frozen, poses must be changing somewhere in it over time. Recall that a symposition is more than its partonomy. Though frozen sympositions clearly also have frozen partonomies, sympositions with frozen partonomies do not necessarily need to be frozen themselves. Take the Solar System. The partonomy of the Solar System, at least considering only the planets and the major moons, has not changed in millions of years. Still, the Earth and the Sun have completed millions of loops of a continuous sequence of poses since then, and similarly all the other planets and moons. Importantly, if we tried to discretize the sequence of poses, we would have to make arbitrary choices.

When would we be able to make a non-arbitrary choice? Consider a planet that is happily orbiting the Sun until an unfortunate confrontation with another object throws it off course indefinitely into a highly eccentric orbit. This is hypothesized to have actually happened in the history of our Solar System to “Planet Nine,” which started out between Saturn and Uranus but was at some juncture deflected by Saturn towards Jupiter and then by Jupiter into the far reaches of the Solar System to live out its life with an orbital period of 10,000 to 20,000 Earth-years.5 A non-arbitrary division can be made in the temporal series before and after the planet’s deflection. The exact moment of the division may not be easy to determine, but the more important and fortunately easier question is whether there should be a division, and the answer is yes. Regardless, note that the Jumping-Jupiter scenario, as it’s referred to in astrophysics, did not change the parallel partonomy of the Solar System at any moment;6 its partonomic relevance is purely serial.

Imagine instead a spaceship that orbits Earth many times and then changes course and orbits Mars many times. The change in course affects the serial partonomy just like above with a clear before and after, but if we construct the parallel partonomy, we see that it is also affected, unlike above with Planet Nine. It goes from {{Earth, spaceship}, Mars} to {Earth, {spaceship, Mars}}.7 Let’s continue with just the initial letter of each and try to construct a full spacetime partonomy. The parallel partonomies at first are the repeated {{E, s}, M}, {{E, s}, M}, {{E, s}, M}, etc., but in the end are {E, {s, M}}, {E, {s, M}}, {E, {s, M}}. If we serially sem-link at the gravitational Dust, then we get {{{E, E, E, …}, {s, s, s, …}}, {M, M, M, …}} followed by {{E, E, E, …}, {{s, s, s, …}, {M, M, M, …}}}. Recall that choosing the Dusty bottom instead of some other level for continuously serial sem-linking is arbitrary in frozen sympositions and in sympositions with frozen partonomies, such as the Solar System in the Jumping-Jupiter scenario. When partonomies are not frozen, however, the choice is no longer arbitrary. If we try to sem-link our spaceship scenario at the bottom, we would need to be able to put {E, E, E, E, E, E, …}, {s, s, s, s, s, s, …}, and {M, M, M, M, M, M, …} into a tree. This cannot be done in such a way that reconciles both the initial and final parallel partonomies. We can sem-link at the top, getting {{{{E, E, E, …}, {s, s, s, …}}, {M, M, M, …}}, {{E, E, E, …}, {{s, s, s, …}, {M, M, M, …}}}}, but we procure a very odd result if we do so: the intrinsic serial partonomy of Earth is sliced in two by an extrinsic spaceship flying to Mars!

We can solve the problem by an analogy with taxonomies. There were two ways in which a taxonomy could fail to be a tree: by the presence of either continuous or cartesian variation. There are also two ways in which a partonomy can fail to be a tree. The first has already been covered above: continuous sem-linking does not provide a tree, nor even a graph, because both of these are discrete mathematical objects. The second way occurs when symponents belong simultaneously8 to more than one symposition, a situation that can still be depicted by a graph if no longer by a tree. Non-hierarchical partonomies are ubiquitous. They are necessary any time a symponent moves from belonging to one symposition to belonging to another. Though non-hierarchical partonomies are first defined here for temporal reasons, we will see later that they are also relevant spatially.

Figure 1. A non-hierarchical serial partonomy.
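The point can be made concrete with a minimal parthood mapping; the names and the `is_tree` check below are illustrative assumptions, reusing the spaceship scenario from the previous section.

```python
# A non-hierarchical partonomy as a graph: the spaceship "s" is a
# symponent of {Earth, s} early on and of {s, Mars} later, so it has
# two parents and the parthood structure cannot be drawn as a tree.

parthood = {  # symponent -> sympositions it belongs to (part-of edges)
    "Earth":          ["Earth-s system"],
    "s":              ["Earth-s system", "s-Mars system"],
    "Mars":           ["s-Mars system"],
    "Earth-s system": ["Solar System"],
    "s-Mars system":  ["Solar System"],
}

def is_tree(parents):
    """A partonomy is hierarchical only if every symponent has at most one parent."""
    return all(len(ps) <= 1 for ps in parents.values())

print(is_tree(parthood))  # False: "s" belongs to two sympositions
```

Deleting either of the spaceship’s two part-of edges would restore a tree, which is exactly what a hierarchical partonomy cannot afford to do here.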

Though we need non-hierarchical partonomies to fully account for it, spatiotemporal symposition is as full of hierarchical possibility as just spatial symposition. Consider the spatiotemporal partonomy of an individual {football game} or individual {conference}; they both have many levels above the lowest discrete serial symposition. I’ve mentioned several times that the Logos has developed, grown, or changed after the Big Bang. Given serial sem-linking, however, those expressions are faulty. The Logos spans all of spacetime, planted firmly on both the primordial homogeneity at the Big Bang and the ever-present Dust. The Logos hasn’t developed after the Big Bang, but rather the horizon of its revelation has been expanding.


1. Ignoring relativistic concerns regarding frame of reference. Again, this book is focused on the classical realm.

2. In {e, {qu, qu, qd}}, there is one level between the quarks and the atom, but no levels between the electron and the atom. The in-between serial partonomy could alternatively be depicted as {{{e}, {e}, {e}, …}, {{qu, qu, qd}, {qu, qu, qd}, {qu, qu, qd}, …}}, for clarity. Also, there’s no need to distinguish the serial symponents, with subscripts for instance, because of the direct mapping between the seriality of writing and the seriality of time.

3. You may object because of the Planck time. Note that the Planck time does not set a discretization of time, but rather a bound for the measurability of time. Time could go unmeasurably crazy at such tiny scales even while remaining continuous, if it behaved something like the Weierstrass function or the Koch snowflake, perhaps.

4. The tension can be explored further in the endurantism vs. perdurantism literature, although it doesn’t explore the in-between possibility.


6. In all likelihood there was a reorganization of moons, but we are entirely ignorant of the details, so I’ll ignore this possibility.

7. Continuing with the theme of a very pruned Solar System.

8. “Simultaneously” in a timeless, 4D sense, perhaps oxymoronically.

9. Potential Landscapes, Exergy, and Thermodynamics

Recall that gravity is monovalently attractive. If you had two hydrogen atoms far apart in space, then the parameter space of the symposition of the two would have a dimension corresponding to the distance between them. The distance between them also has a gravitational potential energy associated with it. Farther distances correspond to more potential energy, and closer distances to less. Energy is conserved, however, so as the hydrogen atoms gravitate towards each other, the gravitational potential energy lost due to their getting closer is converted into kinetic energy, making them move with ever greater velocity as they approach each other. Just like a histogram or population density landscape can be constructed over a parameter space, a potential energy landscape can be constructed over it as well. The parameter space of the symposition of the two hydrogen atoms has a potential landscape that reaches a minimum or valley when the two atoms are in the same place.

But that analysis is strictly gravitational. Atoms have primitives that also interact according to the other forces. The potential landscape from electromagnetism is more complicated, but suffice it to say that it does not go to a minimum when the two hydrogen atoms are in the same place, but rather when they are about 7.4×10⁻¹¹ meters apart. The overall potential landscape from the two forces is simply the sum of the two landscapes, at least until even smaller scales where the nuclear forces also become relevant. Since gravity in this context is overwhelmingly weaker than electromagnetism, the distance between the two atoms is almost completely dictated by electromagnetism.
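To make the sum of landscapes concrete, here is a minimal Python sketch for the two hydrogen atoms. The gravitational term is exact; the electromagnetic term is stood in for by a Morse potential, whose parameters D_e and a are rough textbook values for H2 assumed here only for illustration:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.67e-27  # mass of a hydrogen atom, kg
D_e = 7.6e-19   # depth of the H2 electronic valley, J (about 4.75 eV)
r_e = 7.4e-11   # equilibrium bond length, m
a = 1.94e10     # Morse width parameter, m^-1

def gravitational(r):
    return -G * m_H * m_H / r

def electromagnetic(r):
    # Morse potential: repulsive wall up close, valley at r_e, flat far away
    return D_e * ((1 - math.exp(-a * (r - r_e))) ** 2 - 1)

def total(r):
    # The overall landscape is simply the sum of the two
    return gravitational(r) + electromagnetic(r)

# Scan distances from 0.3 to 3 times the bond length for the valley
rs = [r_e * (0.3 + 0.01 * i) for i in range(271)]
r_min = min(rs, key=total)
print(f"valley of combined landscape at {r_min:.2e} m")  # ~7.4e-11 m
```

At the bond length the gravitational term is around 10⁻⁵⁴ joules against an electromagnetic valley of around 10⁻¹⁹ joules, which is the quantitative sense in which the distance is almost completely dictated by electromagnetism.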

In general, when primitives are attracted to each other by some force, the potential energy of their symposition is highest when they are separate; the reverse holds when they are repulsed, as with a pair of electrons. Since the Dust was distributed almost homogeneously at the Big Bang, the Universe was born with an enormous reserve of potential energy in its gravitational form. We can repeat the story from Chapter 2 in terms of energy and potential landscapes. As primitives overcame the Jeans limit and collapsed together, the gravitational potential energy was converted into other forms of energy. First, it was converted into the kinetic energy of the primitives racing together. When they reached a sufficient density, the atoms began colliding violently with each other. The collisions often stripped the electrons from their nuclei, converting the kinetic energy of the atoms into the electromagnetic potential energy of the separation of electrons from their nuclei, along with kinetic energy for the dispersal of electrons and nuclei in their own directions.

With the nuclei stripped of their electrons, electron degeneracy pressure no longer walled the nuclei off from each other. The repulsion between nuclei continued, however, with the electromagnetic component of the potential landscape of two nuclei reaching a peak with the nuclei together and fused. The residual strong force component was the opposite, reaching a deep trough with the nuclei fused. The trough was much narrower than the electromagnetic peak, however, because the residual strong force only operates at much closer distances. Thus the combined potential landscape from the electromagnetic and strong forces had two valleys, one with the nuclei far apart and one with the nuclei fused, separated by a high potential mountain. You can imagine trying to roll a bowling ball over a hill. With a fast enough roll, the ball would overcome the peak and roll into the valley on the other side; with a roll too slow, it would just go up and then come back down on the original side.1 Nuclei racing together with enough velocity and kinetic energy overcome the barrier of electromagnetic repulsion and enter the valley of residual strong attraction. For nuclei summing to the size of iron, the residual strong valley of fusion is deeper than the unfused valley. When these nuclei fuse, the drop in potential energy from the difference in depth of the valleys is converted to other forms, such as kinetic energy and electromagnetic radiation, or light, which eventually shines into space, illuminating planets like our own.

Ordinary nuclear fusion in stars does not produce elements substantially larger than iron because the depth of the residual strong valley of fusion is overcome by the height of the electromagnetic peak in these elements. The exceptional circumstances for the fusion of elements beyond iron occur during novae and supernovae, stellar explosions, processes which are too complicated to describe in detail here. When elements far beyond iron are created, they are quite often radioactive, with the peaks separating them from their radioactive products being relatively easy to overcome. When they fission or decay and drop back down to elements closer to iron, they release potential energy.

The release of energy from radioactive decay is the primary source of heat within Earth’s core. The gradual cooling of an object can be used to determine how old it is if one has a good estimate for its initial temperature. Lord Kelvin calculated the age of the Earth a century and a half ago using the correct assumption that it originated as molten, but he did not know about radioactivity, so he underestimated Earth’s 4.5 billion years of age as being near 100 million years. Without radioactivity, Earth’s core would have cooled a long time ago. Once Earth’s core cools, there will no longer be dissipation of energy from it into space. Currently, that dissipation is the primary factor driving the motion of tectonic plates through the convective processes churning in Earth’s mantle. The motion of the plates powers enormous stresses in the solid materials of Earth’s crust. The stresses are carried by distortions in interatomic bonds, such that they are forced out of the comfort of the lowest points of their potential valleys and up the sides of potential mountains. With enough stress, the potential valleys are finally escaped, and the partonomic neighborhoods of the involved atoms can shift quickly in an earthquake. If the earthquake happens under the sea, it can displace massive amounts of water, throwing the sea out of its own potential valley resulting in a tsunami.

Let’s summarize the pathway of potential energy from Big Bang to tsunami. Primordial gravitational potential energy is used to overcome electromagnetic potential barriers to nuclear fusion. Particularly violent events like novae overshoot the lowest potential valley of iron nuclei, storing potential energy in larger radioactive nuclei. The stored nuclear potential energy is released in planet cores through radioactive decay, powering the convection of molten rock and the motion of tectonic plates. The energy from the motion of tectonic plates is stored in stressed interatomic bonds in rock. The electromagnetic potential energy in the stressed bonds is released in earthquakes, where it can differentially raise and lower sea level. The disrupted sea level packs the energy into gravitational potential energy, full circle from the Big Bang, where it dissipates in a series of waves that transfer the energy elsewhere. The overall story is rather simple: primitives want to flow energetically downhill in the landscapes over the parameter spaces of the sympositions they participate in, but they are blocked by many potential barriers. When they do succeed in flowing downhill, they release energy, which in turn interferes with other primitives flowing downhill.

The energy differential between the actual state of a system with its primitives stuck behind potential barriers and the state of the system where all of the primitives have tunneled through their barriers and into their lowest valleys is called exergy. About 5.9 million years ago, the Strait of Gibraltar closed, and the Mediterranean Sea dried up leaving a string of very salty lakes along its floor. This created a large differential in the sea levels of the terrestrial ocean and of the shriveled Mediterranean. You can imagine being an entrepreneur and building a mill on the closed strait, together with some canals to carry oceanic water over a mill-wheel that dumps into the Mediterranean basin. Unfortunately, after about 0.6 million years of hounding investors, your venture didn’t get funded because the strait reopened, triggering the Zanclean Deluge of the Mediterranean basin.2 Where there is a differential in the height of the neighboring bodies of water, a mill can be built to extract the exergy of the falling water. If there is no differential, a useful mill cannot be built. But even without the differential, there is still gravitational potential energy because the water on Earth hasn’t collapsed to a black hole.

It is interesting to ask how much of the energy right after the Big Bang was exergy. Apparently quite a bit of it was, and it would be fun if exactly all of it was, but this is an open question in physics. Regardless, we can look into the World and try to find the distribution of exergy. We have already discussed at length the exergy in simple gravitational, chemical, and nuclear systems. A system also has exergy whenever its partonomic neighborhood includes two or more sympositions at different temperatures; the hotter ones have exergy that can be extracted by letting the heat energy flow into the cooler ones. When they all reach the same temperature, no more exergy can be extracted. Aside from these examples, exergy resides in a complicated tangle of volume, pressure, temperature, and other factors, which are studied by the science of thermodynamics. We can see that the Sun harbors an enormous amount of exergy, as does Earth’s core. A simple way to find exergy is to find those sympositions that radiate heat; life forms do, as do electronic devices, automatic machines, and our homes in winter.

When exergy decreases, where does it go? Where did it go when the Atlantic refilled the Mediterranean? The event was probably very loud, so much of the exergy ended up as sound, which is just the coordinated oscillations of interatomic bond lengths of atoms and molecules of air as they are periodically displaced to and fro at the bottom of their potential valleys. These oscillations are also called “phonons.” Incidentally, heat is essentially the same thing, except that it includes oscillations in all potential valleys, not just those of bond length: those of bond angles and rotations, for instance. Exergy is lost as it diffuses into the maze of partonomic neighborhoods in the Logos, where no symposition can coordinate a regathering of it all. On the surface of the Earth, the potential foothills that it gets lost among are almost always dependent upon electromagnetism, and the last destination for it is as electromagnetic radiation into space.

Finally, it’s worth addressing the relationship between symmetry and energy. Many of you may understand symmetry as being a rough measure of order. When energy is added to a system or partonomic neighborhood, its sympositions jostle about more, and thus one could expect its order to decrease with the addition of the energy. At a first pass, then, it seems that we could expect symmetry to decrease when energy increases. Enter barium titanate, BaTiO3. Barium titanate is a crystalline compound with interesting electromagnetic properties like photorefractivity and piezoelectricity that melts (or freezes) at 1625 °C. If you freeze it through the melting point and keep cooling well below room temperature, it passes through a sequence of solid phases3 that have, in order, hexagonal, cubic, tetragonal, orthorhombic, and rhombohedral crystal structure, where each is a variant on the same unit cell symposition of barium, titanium, and oxygen ions. This sequence arranged by decreasing temperature in fact proceeds from more symmetry to less.

Figure 1. The unit cell of BaTiO3, at a variety of temperatures, with the dielectric constant plotted. Red spheres are oxygen ions, green barium ions, and blue titanium ions.

A parameter space for the unit cell sympositions for each phase can be constructed whose dimensions measure bond lengths and bond angles between adjacent ions. A small chunk of barium titanate has far more than trillions of ions, so the histogram over the parameter space has similarly many datapoints. Each phase has a different histogram. The hottest phase has the broadest modes, since the individuals are jostling about so much and can’t decide quite where to be, and conversely the coolest phase has the narrowest modes. As the small chunk is cooled, a given mode gets narrower and narrower until a thermodynamic breaking point where it fractures into several pieces. At a high temperature, the titanium ion bounces all around the center of the unit cell, but on average, it is exactly in the middle; at low temperature, the titanium ion picks a direction and shifts off-center, breaking the symmetry with the other ions in the unit cell. Thus the symmetry is broken at the level of interatomic poses, but it is also broken on a higher level: the off-center shift propagates from one unit cell to the next, until it reaches another wave of propagation that decided to shift in another one of the possible directions. If we assume that the small chunk started out at the higher temperature as monocrystalline, that is as a single perfect repeating lattice all throughout, then it can easily end up partitioned into multiple crystal grains, probably separated by twin boundaries.4 This partitioning entails the insertion of a level into the partonomy.

We can see that the association between energy and symmetry is the opposite from what one might expect: partonomic neighborhoods with more energy have more symmetry, not less. At the Big Bang, all partonomic neighborhoods were in enormously high energy states, and these correspond to the known homogeneity of the quark-gluon plasma. What happened before the quark-gluon plasma kindles current research into theories of supersymmetry. As the Universe has settled into lower energy states after the Big Bang, a cascade of broken symmetries in many partonomic neighborhoods has fractured populations, directly infilling partonomic gaps with more levels and growing the Logos upon the Dust.


1. A more realistic illustration accounts for the fact that it’s more like two balls participating together in the construction of a hill to divide them.

2. Dramatized by xkcd:

3. In general, a solid of a given compound can have many phases at different temperatures (and pressures), but the compound will have only one liquid phase and only one gaseous phase.


8. Taxonomies, Variation, and Broken Symmetries

The fundamental building block of a partonomy is the relation “part of,” and it links vertically from a lower level to a higher level in a partonomic neighborhood. Sympositions and their symponents do not often belong to the same population because they are rarely particularly similar;1 thus, a population generally resides on one, given level of a partonomy. Consequently, distillations of populations do have specific vertical localizations in the Logos even though they do not have specific horizontal, i.e. quasi-spatial, localizations like individuals and populations. Similarity, however, is obviously a matter of degree, so a population can be subdivided into subpopulations expressing greater similarity or collected into superpopulations expressing less. Subpopulations and superpopulations can both be distilled, and they can be linked by the horizontal relation “type of” into a hierarchical structure similar to a partonomy. Such a structure is a taxonomy. For instance, with the population of mammals, one can take the subpopulation of primates and the superpopulation of vertebrates. Primates are a type of mammal, and mammals are a type of vertebrate, and these relations extend within the very large and hierarchical Linnaean taxonomy of the “tree of life,” which is both a colorful metaphor and a precise mathematical expression.2

This analysis suggests that taxonomies are always trees. In fact, there are two ways in which taxonomies can fail to be trees. Recall the population of raindrops from Figure 1 in the last chapter. Take a raindrop with -30 microstatcoulombs of charge and build a hierarchy of subpopulations that allow for increasingly more variation from it. We can specify the subpopulation of raindrops with -60 to 0 mstatC of charge, and then, nesting that one within a bigger one, we can specify the subpopulation of raindrops with -90 to 30 mstatC. We can, however, select a different raindrop with, say, -40 mstatC of charge and build a different hierarchy of subpopulations from it: -70 to -10 mstatC and then -100 to 20 mstatC. These latter sets neither contain nor are contained by their former counterparts. So we have two different and irreconcilable possibilities for the hierarchy of subpopulations of raindrops, neither of which seems ‘better’ in any naïve sense.

The problem, of course, is that I’m forcing hierarchical structure onto the distribution of raindrops when it simply isn’t there. This is true in general for populations with one mode, and we can say that such populations exhibit continuous variation because there is a continuum of possibilities for the relevant parameter with no non-arbitrary location to divide it. Populations that can be divided and for which the divisions can be organized hierarchically exhibit linnaean variation, as the Linnaean taxonomy is the prime example. If you imagine the modes of a population with linnaean variation as being primitives that can be sem-linked, then the partonomy of its histogram is its taxonomy! Once again there is a strong analogy between physical space and parameter spaces, and there is a likeness between the vertical (“part of”) and horizontal (“type of”) orientations in the Logos.

We have only considered a one-dimensional parameter space for the raindrops, that of the charge; we could consider two or more dimensions simultaneously, such as both charge and mass. In that more general case, the raindrops could be said to exhibit multivariate continuous variation, contrasting with the univariate continuous variation over a one-dimensional parameter space. It’s possible that the values of the data points in two or more dimensions of a parameter space are correlated with each other. Raindrops with a greater charge may generally have more mass, but perhaps not in a perfectly predictable way. Thus charge and mass could be correlated in raindrops, but without being interchangeable measures. This correlation would depopulate certain areas of the histogram, but without changing the number of modes. This can be called interdependent multivariate continuous variation, in contrast with independent multivariate continuous variation when there is no correlation. By necessity, all independent and interdependent variation is multivariate, because you need at least two dimensions to have a correlation, so the “multivariate” can be dropped, e.g. as just independent or interdependent continuous variation.
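The interdependence can be simulated. In the following sketch every number is invented for illustration: each raindrop’s mass leans on its charge plus some independent noise, and the Pearson correlation coefficient comes out well away from zero without the two parameters being interchangeable:

```python
import random

random.seed(0)

# Invented raindrop population: charge in mstatC, mass in mg,
# with mass partially dependent on charge (interdependent variation)
N = 10_000
charges = [random.gauss(-30, 20) for _ in range(N)]
masses = [1.0 + 0.005 * q + random.gauss(0, 0.05) for q in charges]

def pearson(xs, ys):
    # Covariance divided by the product of standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

r = pearson(charges, masses)
print(f"charge-mass correlation: {r:.2f}")  # near 0.9: correlated, not interchangeable
```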


Figure 1. Independent and interdependent continuous variation. On the right, the NW and SE parts of the distribution are slightly underpopulated, and the NE and SW parts of the distribution are slightly overpopulated, making the two parameters correlated.

Linnaean variation is a special case of discrete variation, which pertains to those populations that have more than one mode. Every population exhibits exactly one of continuous or discrete variation, given some parameter space like the natural one, since the number of modes is either one or more than one.3 In discrete variation that is not linnaean, the modes cannot be sem-linked together to create a single hierarchical tree. For instance, imagine a population of pea plants that can either be short or tall, depending on some gene with two forms or “alleles,” and can either have constricted or full pods, depending on some other gene with two alleles. Then the individuals have four ways for combining the two characteristics: they can be short with constricted pods, short with full pods, tall with constricted pods, or tall with full pods. The individuals can be plotted in a two-dimensional parameter space of plant height and pod volume. Within this parameter space there will be four modes, each having some small amount of spread. This would be an example of multivariate discrete variation.

The question then arises as to which two pairs of modes should be sem-linked first. Should we have {{{tall and full}d, {tall and constricted}d}d, {{short and full}d, {short and constricted}d}d}d or {{{tall and full}d, {short and full}d}d, {{tall and constricted}d, {short and constricted}d}d}d? Neither of these is more forthcoming than the other unless one of height or pod shape is artificially considered primary and the other secondary. Consequently, the taxonomy of the population is not a hierarchical tree, but it is instead a combinatorial crossing between all the possibilities of each allelic modularity. This is basically the same concept as the cartesian product in set theory, so I call such variation cartesian variation. Cartesian variation can be either independent or interdependent, for instance if a pea plant being tall makes it more likely or less likely to have full pods, rather than being neutral.

In sum, there is continuous variation for distributions with one mode and discrete variation for distributions with more than one mode. Discrete variation is perfected in either linnaean or cartesian variation, which are mutually exclusive. Moreover, all of these varieties of variation can be fit into a single framework. Let’s take another detour through math to demonstrate. Pascal’s triangle is a very straightforward mathematical object that is constructed as follows: take a blank sheet of paper and write a “1” in the top center. This will be the first line. Now put zeros on both sides of the one: “0 1 0.” Now for every adjacent pair of numbers on the top line, write their sum in between them on the second line. The first pair, “0 1,” adds to 1, and the second pair, “1 0,” also adds to 1: so the second line is “1 1.” Once again put zeros on both sides of the second line, “0 1 1 0,” and construct the third line in the exact same way: “1 2 1.” The fourth line is “1 3 3 1,” the fifth “1 4 6 4 1,” the sixth “1 5 10 10 5 1,” etc.
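The construction is mechanical enough to hand to a computer. Here is a minimal Python sketch of it; the function name next_line is my own invention:

```python
def next_line(line):
    # Pad with zeros on both sides, then sum each adjacent pair
    padded = [0] + line + [0]
    return [padded[i] + padded[i + 1] for i in range(len(padded) - 1)]

# Build the triangle downward from its apex, "1"
line = [1]
for _ in range(5):
    line = next_line(line)
print(line)  # [1, 5, 10, 10, 5, 1], the sixth line
```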


Figure 2. Pascal’s triangle. You can imagine zeros along the outside.

If you add all the numbers in a line, you get 1 for the first line, 2 for the second, 4, 8, 16, and the subsequent powers of 2. This doubling happens because each number in each line manifests twice in the sums below it. We can imagine averaging each pair of numbers instead of adding them. Doing so would make each line sum to exactly 1 and would give us Pascal’s normalized triangle. The nth line of Pascal’s normalized triangle is, among other things, the distribution of probabilities for n coin tosses that come up the same as the first toss, also known as the binomial distribution. Each normalized line can thus be considered as a probability density function, which is an expression of what the histogram over a parameter space is likely to be. As n gets larger and larger the resulting series of numbers in line n of Pascal’s triangle gets closer and closer to approximating a curve called a gaussian, popularly known as the “bell curve.” Since every line of Pascal’s triangle approximates a gaussian and every line is a split and shifted and added copy of the one above it, it follows that gaussians retain their shape under the operation of “splitting-shifting-adding”—and for the normalized triangle more appropriate for probability distributions—“splitting-shifting-averaging.”
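The claim that each normalized line is a binomial distribution can be checked directly: average instead of add, then compare line n against the coin-toss probabilities C(n, k)/2^n. A small verification sketch:

```python
from math import comb

def next_line(line):
    # Averaging instead of adding keeps each line summing to exactly 1
    padded = [0.0] + line + [0.0]
    return [(padded[i] + padded[i + 1]) / 2 for i in range(len(padded) - 1)]

line = [1.0]
n = 10
for _ in range(n):
    line = next_line(line)

# Binomial probabilities for n tosses coming up the same as the first
binomial = [comb(n, k) / 2 ** n for k in range(n + 1)]
print(max(abs(a - b) for a, b in zip(line, binomial)))  # 0.0
print(sum(line))  # 1.0
```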

Imagine instead of starting with “1” at the top of the page, we started with “1 0 0 1.” I don’t believe there is any sense in which “1 0 0 1” approximates a gaussian. Our subsequent lines would be “1 1 0 1 1,” “1 2 1 1 2 1”; but after that, it stops being so trivial: “1 3 3 2 3 3 1,” “1 4 6 5 5 6 4 1,” “1 5 10 11 10 11 10 5 1,” “1 6 15 21 21 21 21 15 6 1.” The last computed line no longer has a dip in the middle. After several iterations, it is now approximating a gaussian! Or at least it now has one mode. In fact, it is irrelevant what string of numbers you put at the top of your page, as long as it’s finite; any string will eventually become monomodal and after that start approximating a gaussian at some line, although your paperage may vary.4

We can reverse the process and split a single mode by over-shifting. Say we’re at the third line of the triangle: “1 2 1.” We split: “1 2 1,” “1 2 1.” We should get to “1 3 3 1” after shifting and adding but we shift too much: “1 2 1 0 0 0 0 1 2 1.” If we keep iterating, we’ll eventually recuperate a gaussian, but for now we’re experiencing a setback. Shifting too much doubled the number of modes. Splitting-shifting-adding, or what we can call broken symmetries, can account for both discrete and continuous variation, depending on how big the shifts are. Symmetry belongs to subpopulations that are alike, and it is broken when a factor is introduced that differentiates them. In plants, there is usually a whole array of genes that can affect height. If each of these genes comes in a taller and a shorter allele, then whether the resulting distribution is monomodal or multimodal depends on the profile of overlap between the “shifts” corresponding to the difference in height of a pair of alleles of the same gene. If all of these are small and of comparable size, then the population will have one mode, but if one of them is substantially bigger than all the rest, then it will have two modes.
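The operation itself can be sketched with the shift size as a knob; the mode counter below is a simple local-maximum scan of my own devising:

```python
def split_shift_add(line, shift):
    # Two copies of the distribution, one displaced by `shift` bins, summed
    out = [0] * (len(line) + shift)
    for i, v in enumerate(line):
        out[i] += v
        out[i + shift] += v
    return out

def count_modes(line):
    # Count peaks: a descent after an ascent closes one mode
    modes, rising = 0, True
    for a, b in zip(line, line[1:]):
        if b > a:
            rising = True
        elif b < a:
            if rising:
                modes += 1
            rising = False
    return modes

print(split_shift_add([1, 2, 1], 1))  # [1, 3, 3, 1]: one mode
print(split_shift_add([1, 2, 1], 7))  # [1, 2, 1, 0, 0, 0, 0, 1, 2, 1]: two modes
```

A shift of one bin reproduces the ordinary triangle step; a shift wider than the distribution itself doubles the number of modes, exactly the setback described above.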

These considerations are all of one-dimensional parameter spaces, however. This results from the two-dimensionality of the paper used to construct Pascal’s triangle, one of whose dimensions is taken for the splitting-shifting-adding operation. Imagine replicating the sheet of paper with its possibly mutant triangle, stacking the copies, and splitting-shifting-adding through the sheets of paper as well as across them. This can theoretically be done in even higher dimensions, accounting for parameter spaces of arbitrary dimensionality. If the introduction of broken symmetries adding new dimensions is always done to the entirety of the distribution, then cartesian variation is accounted for. If the addition of a new dimension is done to one mode at a time, then linnaean variation is accounted for. Whether mode-multiplying factors are applied always to all modes together or never to more than one mode is the factor differentiating cartesian and linnaean variation. Intermediate possibilities can be procured, and these would deliver mixtures of linnaean and cartesian variation.5 Finally, there’s a distinct similarity between independent continuous variation and independent cartesian variation. Both of these are created by many independent symmetry-breaking factors, except that in the independent cartesian case, there are a few factors that have much larger shifts than the rest, with the distribution of these factors being, in effect, multimodal.

I have developed a taxonomy for the population of populations based on the characteristics of populations’ distributions. We can ask what variation this taxonomy expresses. A distribution is either continuous or discrete. This variation is discrete. Both discrete and continuous distributions can either be univariate or multivariate; this variation is cartesian. Only multivariate distributions can be interdependent or independent; this variation is linnaean. I haven’t spent much time on this, but monomodal distributions can be gaussian or logistic or otherwise depending on a large family of parameters used as multipliers or exponents or otherwise in the equations defining many types of distributions.6 The parameter space of distributions does not have a specific dimensionality since not every distribution uses every parameter, but the framework of this chapter is sufficiently flexible to account for this and many other polymorphisms of the distribution of distributions.


1. Sympositions similar to their symponents would be similar to fractals. Sympositions like mountainscapes and shorelines would be well-known examples of this.

2. “Tree” was defined in Chapter 3 as a connected graph with no loops in it.

3. The number of modes isn’t always clear, but that’s what statistical techniques are for.

4. This is also related to the fact that the limit of n-fold autoconvolution is always a gaussian.

5. There’s one specific mixture of linnaean and cartesian variation that I would like to see implemented. There are two popular ways to organize email archives. One is to put individual emails in folders, which can then be put into folders, into folders, etc. The other is to create labels and apply them to whichever individual emails are relevant to that label, often with more than one label per email. The folder strategy recapitulates linnaean variation and the label strategy cartesian variation. I would love to have a system where labels are applied to emails but where labels are placed into a folder hierarchy. By never applying more than one label per email or never using more than one folder, either system can be recovered, but the mixed system broaches basically all my use cases.

6. See also my free iOS app Sympose It on the App Store for an interactive implementation of the concept of symposition applied to mathematical equations.

7. Parameter Spaces, Histograms, and Modes

Say I’m trying to characterize a population. One of the best ways to do so is to take the same numeric measurement or measurements from every individual in the population and make a scatterplot of all the data points, to see how they distribute themselves. If we only take one measurement from each individual, then our scatterplot is just a bunch of points on the standard number line. If we take two measurements from each, then our scatterplot is a bunch of points on the standard Cartesian plane. If we take three or more, then our scatterplot is a bunch of points in 3D-space, 4D-space, or higher. The 1-, 2-, 3-, 4-, etc., dimensional space that we plot our points in is the parameter space of our characterization, and each dimension is a parameter.1 The measurement can belong to the symposition as a whole, or to the symponents, or to the poses between them, or sometimes even abstruse mixes of these or of these and the measuring mechanism. Consequently, a symposition with more symponents will have more potential measurements and thus a higher dimensional parameter space. Ultimately though, the choice of measurements and thus the dimensionality of the parameter space is up to the measurer, but the enumeration of symponents and poses, both at the top levels and down to the substrate, provides a natural parameter space. If I mention a symposition’s parameter space without specifying exactly how it was chosen, then you can assume I mean that natural one that does not include abstruse mixes.

If we add one dimension to our parameter space, we can construct a histogram. A histogram carves up a parameter space into a number of discrete parcels or bins and depicts with the additional dimension how many individuals fall within each parcel or bin. A histogram effectively shows the local density of individuals in a subregion of the parameter space. You could call it a “population density landscape,” with peaks in the landscape corresponding to high density. These peaks are called modes. The number of peaks in a histogram depends on both the choice of parameter space and the choice of bins. Consider the population of raindrops that fall during a storm. For reasons related to lightning, raindrops usually carry a small amount of net static electric charge, either positive or negative. If negative, that means there is a tiny fraction of a percent more electrons in the drop than protons; and if positive, a tiny fraction of a percent less. Figure 1 shows a 2-dimensional histogram for raindrops built on a 1-dimensional parameter space with the x-axis being the parameter of charge and the y-axis being a scaled count of raindrops with such charge.
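The carving itself takes only a few lines of code. The raindrop numbers below are invented to mimic the one-mode shape of Figure 1, not taken from it:

```python
import random

random.seed(1)

def histogram(data, bins, lo, hi):
    # Carve the parameter range [lo, hi) into equal bins and count individuals
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in data:
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
    return counts

# Invented raindrop charges (mstatC): a single roughly gaussian mode
charges = [random.gauss(-30, 25) for _ in range(5000)]
counts = histogram(charges, bins=20, lo=-130, hi=70)
peak = max(range(20), key=lambda i: counts[i])
print(f"modal bin: {-130 + peak * 10} to {-130 + (peak + 1) * 10} mstatC")
```

Choosing 20 bins of width 10 is itself a choice; too few bins can merge neighboring modes and too many can shatter one mode into spurious peaks, which is why the number of modes depends on the choice of bins.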


Figure 1. Histogram for the charge of raindrops with a diameter of 1.0 to 1.2 mm. ESU = statcoulombs.
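A histogram like the one in Figure 1 can be sketched as follows. The raindrop charges here are simulated from a single normal distribution, an assumption made purely for illustration, and a mode is counted as any bin strictly taller than both of its neighbors:

```python
import numpy as np

# Simulated raindrop charges (arbitrary units), standing in for measurements
# of a real population; one underlying distribution, so one expected mode.
rng = np.random.default_rng(0)
charges = rng.normal(loc=0.0, scale=0.5, size=10_000)

# Carve the 1-D parameter space into bins and count individuals per bin.
counts, edges = np.histogram(charges, bins=40)

# A bin is a mode if it is strictly taller than both neighbors. Sampling
# noise can add small spurious local maxima alongside the main peak.
modes = [
    i for i in range(1, len(counts) - 1)
    if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]
]
print(len(modes), edges[np.argmax(counts)])
```

The tallest bin sits near zero charge, matching the single clear central mode in Figure 1.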

The raindrop histogram has one clear mode in the middle. Figure 2 plots stars according to their temperature/color and their luminosity/absolute magnitude. More massive stars tend to be brighter, but even though mass might be a more natural choice of parameter than brightness, there’s no straightforward way to measure the mass of a star from Earth. The parameter space in Figure 2 is 2-dimensional, and though the additional dimension for the histogram is missing, you can imagine it easily. The histogram would have at least three modes, corresponding to white dwarfs, main sequence stars, and giants, with a possible fourth mode for supergiants if they aren’t just a tail off the distribution of giants.


Figure 2. Hertzsprung-Russell diagram.
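The multi-modal structure that a histogram over Figure 2 would reveal can be mimicked with synthetic data. The two clusters below are stand-ins for distinct stellar populations (say, main sequence stars versus giants), not astronomical measurements:

```python
import numpy as np

# Two synthetic clusters in a 2-D parameter space; positions and spreads
# are illustrative, not fit to any real Hertzsprung-Russell data.
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(5_000, 2))
cluster_b = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(5_000, 2))
points = np.vstack([cluster_a, cluster_b])

# The "missing" histogram dimension: a count of individuals per 2-D bin.
H, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=30)

def density_at(x, y):
    """Count of individuals in the bin containing the point (x, y)."""
    i = np.searchsorted(xedges, x) - 1
    j = np.searchsorted(yedges, y) - 1
    return H[i, j]

# Two dense peaks separated by a near-empty valley: two modes.
print(density_at(0, 0), density_at(1.5, 1.5), density_at(3, 3))
```

The bins at the two cluster centers are far denser than the bin at the midpoint between them, which is exactly the peaks-and-valleys landscape the text describes.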

Histograms and parameter spaces take advantage of our natural abilities to visualize and understand physical space. You can flip the conceptual connection and imagine the three ordinary dimensions of space as being a parameter space, with the Dust as data points. A histogram constructed from this space as it encompasses the whole Universe would simply correspond to the distribution of matter throughout it. In the state immediately after the Big Bang, there was exactly one mode in the distribution, as matter was spread evenly throughout the hot quark-gluon plasma. Today, matter is quite clumped into many modes corresponding to superclusters, galaxies, planets, rocks, chemicals, etc. The evolving Logos has split the original mode into many modes, each of which sem-links its own many modes contained within. This hierarchy of modes of the distribution of matter is, at a rough pass equating position with pose, the partonomy of the Logos, which has been shaped at its top levels by gravity and at its bottom levels by the other forces.

But the modes and modes-within-modes of the distribution of physical primitives lie strictly within the 3-dimensional “parameter space” of conventional physical space. Any individual symposition is a hierarchy of symponents posed together. If there is a natural parameter space for any symposition, and the dimensions of that space reflect the symposition itself and its symponents and poses, then there is a hierarchy of parameter spaces corresponding to the hierarchy of sympositions, where some or all of the dimensions of one space are used in the construction of the dimensions of the space above it. The Dust is constrained to the parameter space of conventional space, but sympositions live in the nearly unbounded parameter space of possibility in the Logos.


1. It’s also called a configuration space and is conceptually similar to feature, phase, and state spaces.