Working Paper | Submitted for review | Interdisciplinary Perspective

The Monoculture Problem

Signal Diversity, Compression Failure, and the Systematic Degradation of Human Adaptive Systems

John Ricketts1

1 AI+Wellbeing Institute & International College of Liberal Arts, Yamanashi Gakuin University, Kofu 400-8575, Japan.
In collaboration with the University of Tokyo and University of Melbourne.
Correspondence: ai-well-being.com
Keywords: signal diversity, rate-distortion theory, complex adaptive systems, monoculture, neurochemical range, wellbeing economics, complexity economics, governance, attention economy
Abstract

Complex adaptive systems — nervous systems, memory consolidation processes, language communities, economies, and governance institutions — evolved or developed to function in environments of high signal diversity. Their internal compression mechanisms require that diversity not as background noise but as the raw material through which meaningful structure is identified by contrast. We argue that late modernity is systematically narrowing signal diversity at five scales of human organisation through a structurally similar mechanism at each scale: the replacement of rich, varied input distributions with narrow, optimisation-selected ones; the degradation of the shared representational frameworks that compression efficiency depends on; and the progressive suppression of the interaction structure in which adaptive properties are located. We develop a unified theoretical framework grounded in rate-distortion theory and show that this pattern — which we term monoculture — produces a consistent damage profile across levels: reduced adaptive range, progressive brittleness, and emergent pathology that is difficult to detect from within the optimising framework that produces it. Claims vary in evidential status across levels; the framework generates specific, falsifiable predictions at each level that the paper makes explicit. The levels are connected through a causal architecture that the paper specifies and marks according to evidential confidence. The shared solution type — restore signal diversity to the level each compression mechanism requires — is proposed as a design principle and research orientation rather than a policy prescription.

---

Introduction

In 1970, the United States lost approximately fifteen percent of its corn crop to a fungal pathogen, Helminthosporium maydis, that had never previously caused widespread damage. The outbreak was swift, devastating, and, in retrospect, entirely predictable. American agriculture had spent the preceding decades replacing genetic diversity with efficiency: hybrids sharing a single cytoplasmic source, Texas male-sterile cytoplasm, had come to account for eighty percent of the nation's corn seed. The pathogen found no resistance because the crop offered no variation. The disaster was not caused by the fungus. It was caused by the monoculture.

This paper borrows that concept and extends it. Its central claim is that late modernity is enacting a structurally analogous impoverishment at five distinct levels of human organisation simultaneously — neurochemical, cognitive, linguistic, economic, and institutional — and that this impoverishment follows from a common mechanism: the systematic narrowing of signal diversity in systems whose healthy functioning depends on that diversity. The five levels are not metaphorical extensions of the 1970 corn blight. They are instances of the same class of failure, operating through the same formal logic, producing the same type of damage: reduced resilience, emergent pathology, and dysfunction that cannot be diagnosed from within the optimising framework that produced it.

The argument is not that complexity is always better than simplicity. Simplification is cognitively and socially indispensable. The argument is narrower and more precise: that certain classes of complex adaptive system — nervous systems, sleeping brains, language communities, economies, governance institutions — evolved or developed to function in environments of high signal diversity, and that their internal compression mechanisms require that diversity not as an option but as a substrate. When that substrate is impoverished, the compression fails. The failure appears first as reduced adaptive capacity, then as pathology, then as system-level dysfunction. At each of the five levels, late modernity's optimising logic is producing exactly this sequence.

The unifying theoretical framework comes from information theory, specifically from rate-distortion theory in the tradition of Shannon (1948) and Berger (1971). That framework is introduced in full in the following section. What it permits, applied across the five levels, is a precise account of why signal diversity matters: not because diversity is intrinsically valuable, but because efficient compression — the core function of every adaptive system considered here — is impossible without it. The monoculture problem is, at bottom, a compression failure. Once this is seen clearly, both the damage and the remedy become tractable in ways they are not from within the usual disciplinary frameworks.

What needs to be explained

Any adequate account of the current state of human wellbeing must grapple with a cluster of anomalies that standard frameworks handle poorly.

The first is the Easterlin paradox: across affluent societies, average life satisfaction has remained broadly flat for decades while material capacity has risen dramatically. Standard welfare economics predicts the opposite. The anomaly is usually addressed by invoking hedonic adaptation, relative income effects, or the limits of subjective wellbeing measures — but these are descriptions of the pattern, not explanations of the mechanism.

The second is the simultaneous global rise in anxiety, depression, loneliness, chronic inflammatory illness, sleep disruption, and attentional fragmentation across populations whose material circumstances are historically unprecedented. These are not independent trends. They are correlated in time, correlated in population, and concentrated in exactly those environments most thoroughly penetrated by late modernity's optimising infrastructure: high-income digital environments. Standard frameworks treat them as separate clinical or behavioural problems. They are not separate.

The third is the increasing brittleness of governance and institutional response to novel challenges. Pandemic mismanagement, climate policy failure, and the proliferation of AI-enabled harms share a structural signature: institutions optimising for measurable indicators failed to detect or respond to signals that were not in their metric set until the consequences were severe. This is not primarily a failure of intelligence or political will. It is a failure of signal architecture.

The fourth is what might be called the LLM paradox: large language models trained on the accumulated record of human communication produce outputs that are locally coherent but globally shallow, confident but unreliable, fluent but epistemically impoverished. Standard accounts attribute this to training data quality or model scale. We will argue, following the rate-distortion account of language developed in the institute's prior work, that the pattern is predictable from the absence of shared world — the LLM produces language without the compression infrastructure that shared context provides, and the result is exactly the distortion profile that rate-distortion theory would predict.

These four anomalies share a structure. In each case, a system optimised for narrow, measurable throughput is producing emergent pathology in the domain of signal diversity and adaptive compression. This paper argues that they are instances of a single problem and offers a unified framework for understanding why.

The five-level architecture

The paper proceeds through five scales of analysis, each constituting an independent argument and each contributing to a cross-scale synthesis.

Level 1 — Neurochemical. The human neuromodulatory system comprises over one hundred identified signalling molecules. The dominant stimulus environment of late modernity is plausibly selective in its activation: variable-ratio reinforcement architectures in attention technology, chronic low-grade threat from social comparison and financial precarity, and the brief affiliative warmth of parasocial and digital contact each target identifiable physiological systems — dopamine-mediated anticipation, the cortisol stress axis, and acute oxytocin-mediated bonding, respectively. The argument that these are over-represented relative to the rest of the neuromodulatory architecture is a theoretical claim supported by functional and pharmacological inference rather than direct comparative measurement; it should be read as a framework hypothesis rather than an established finding. What is better supported is the dynorphin counterregulatory mechanism: preclinical and pharmacological evidence indicates that high-frequency dopaminergic activation produces kappa-opioid rebound that reduces hedonic baseline. We propose the construct structural anhedonia — a chronic background flatness at the population level, distinct from clinical depression, produced by sustained reward cycling — as a testable prediction of this mechanism applied at scale. We term the broader pattern neurochemical monoculture to name the hypothesised mismatch between the range of activation states the human neuromodulatory system evolved to traverse and the range that contemporary stimulus environments actually provide.

Level 2 — Cognitive/sleep. The sleeping brain performs active compression: a graph distillation operation that prunes weakly connected experience traces while preserving the strongly connected and structurally central. Formal analysis shows this is isomorphic to rate-distortion optimisation under resource constraints. The process is input-dependent: the compression algorithm requires varied, semantically rich waking experience to identify what is structurally important by contrast. When waking input is homogeneous — when the same emotional registers, the same attentional modes, the same stimulation patterns recur with minimal variation — the compression problem degrades. Sleep cannot distill meaning from impoverished signal.

Level 3 — Linguistic/social. Language compression depends on shared context. The more common ground two interlocutors share — overlapping experience, expertise, culture, history — the lower the encoding rate required to transmit a given meaning within acceptable distortion. This is a formal result, not a metaphor: it follows directly from the rate-distortion formalism developed in the institute's prior work on language. Late modernity fragments shared worlds through filter bubbles, platform-mediated communication, social sorting, and the replacement of sustained community with brief transactional contact. When shared worlds thin, the compression infrastructure of language degrades, and communication becomes simultaneously more effortful and less faithful to its intended meaning.

Level 4 — Economic. GDP monoculture treats aggregate production as the measure of social progress, compressing the full dimensionality of human flourishing into a single scalar. This is not merely a measurement problem. It is a compression failure in the technical sense: the codebook used to represent the economy destroys exactly the information — relational, compositional, ecological, distributional — that drives actual wellbeing. The institute's Wellbeing Observatory documents the empirical consequences across 54 years and dozens of countries: wellbeing efficiency diverges dramatically from wealth, and the divergence is systematic, not random. The complexity economics critique of DSGE modelling provides the mechanism: systems that exhibit emergent properties at the macro level cannot be understood by optimising their components' individual utility functions, because the relevant causal structure lives in the interaction terms, not the individual terms.

Level 5 — Institutional/governance. Static, prediction-based governance optimises toward measurable targets in the belief that the system can be steered to a better equilibrium. This fails precisely when the system is a complex adaptive system in a genuinely uncertain environment, because the metric set is always a small subset of the system's actual signal space. Policy apparatus running on GDP, crime rates, test scores, and engagement metrics is blind to exactly those signals — relational, cultural, ecological, informal — that complex adaptive systems depend on for adaptive response. The result is not merely sub-optimal policy. It is a governance monoculture that systematically degrades the signal diversity required for institutional resilience.

The argument in one sentence

Late modernity is systematically narrowing the signal diversity that complex adaptive systems require for healthy compression — and the damage this produces is visible, cross-level, and worsening.


Theoretical Framework: Rate-Distortion Theory and the Compression Requirement

The compression insight

Every adaptive system faces the same fundamental problem: the world contains more information than the system can encode, store, or act on. The solution, evolved or designed, is compression. Nervous systems compress sensory experience into actionable representations. Sleeping brains compress the day's experience traces into durable memory structures. Languages compress intended meanings into acoustic signals. Economies compress the full space of social activity into prices and quantities. Institutions compress complex social realities into actionable policy targets.

Compression is therefore not a failure mode or a simplification to be regretted. It is the core function of adaptive systems. The question is not whether to compress but how — and this is where the theory becomes critical.

Rate-distortion theory: the essentials

Rate-distortion theory, developed by Claude Shannon (1948) and formalised by Toby Berger (1971), addresses the fundamental trade-off in any compression system between rate — the amount of channel capacity (energy, bandwidth, time, cognitive load) consumed in transmitting or storing information — and distortion — the degree to which the compressed representation deviates from what was being represented.

The rate-distortion function R(D) specifies the minimum rate at which a source can be encoded while maintaining distortion at or below level D. This function has a shape that encodes a fundamental insight: as you tolerate more distortion, you can compress more aggressively (lower rate); as you demand greater fidelity to the original, you must use more channel capacity. There is no free lunch, but there are better and worse strategies for managing the trade-off.
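For reference, the textbook statement of this trade-off is compact; the only notation it introduces is the source X, its reconstruction X̂, and a distortion function d:

```latex
% Rate-distortion function: the minimum achievable rate (mutual information
% between source and reconstruction) over all encodings whose expected
% distortion stays at or below the tolerance D.
\[
R(D) \;=\; \min_{\substack{p(\hat{x}\mid x)\,:\, \mathbb{E}[d(X,\hat{X})] \le D}} I(X;\hat{X})
\]
```

The shape described above follows directly from this definition: R(D) is non-increasing and convex in D.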

For the present argument, three properties of the rate-distortion framework matter most.

First: optimal compression requires a rich, diverse input. A degenerate input — one in which signals cluster in a small region of the source space — cannot be efficiently compressed in the meaningful sense. The compression algorithm cannot learn to discriminate what is signal from what is noise, because everything looks like signal. More precisely: the algorithm's ability to identify what is structurally important, and therefore worth preserving, depends on contrast — on the presence of variation from which importance can be inferred. When input diversity collapses, so does the compression system's ability to distinguish meaningful structure from background.
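A one-line corollary makes the degeneracy point concrete for a discrete source, under the usual convention that reproducing a symbol exactly incurs zero distortion:

```latex
% The rate-distortion function is bounded above by the source entropy.
% As the input distribution collapses toward a point mass, H(X) -> 0 and
% R(D) -> 0 for every D: there is nothing left for the code to discriminate
% or preserve.
\[
0 \;\le\; R(D) \;\le\; H(X)
\]
```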

This is the direct formal basis of the monoculture problem at levels 1 and 2. The nervous system's neuromodulatory architecture evolved as a compression mechanism for the full range of experiential signals available in ancestral environments. The sleeping brain's graph distillation operator requires varied waking experience to identify centrality — to know which nodes are worth preserving — because centrality is defined relative to the distribution of all encountered material. Impoverish the input, and the compression degrades.

Second: efficient compression requires shared codebooks. Rate-distortion theory applies not just to single-system compression but to communication between systems. When two systems (speaker and listener, economy and analyst, institution and citizen) must share compressed representations, the efficiency of compression depends critically on their sharing a common representational framework — a codebook. In Shannon's formulation, channel capacity is a property of the channel itself, but the rate of meaningful transmission a communication system actually achieves depends on how well encoder and decoder are matched. When encoder and decoder share a rich common codebook — a shared world, a shared model of how the system works — they can transmit more meaning per unit of channel capacity. When the codebook diverges, encoding cost rises and distortion increases for a given rate.

This is the formal basis of the monoculture problem at levels 3 and 5. The institute's work on language (Ricketts, 2026) demonstrates formally that common ground functions as compression machinery: conditioning on shared context reduces the mutual information that must be explicitly encoded in the signal, enabling the same meaning to be transmitted at lower rate and lower distortion. When shared worlds thin, encoding cost rises. When institutions and the populations they govern operate from different models of what matters — different distortion functions, in the technical language — the compression infrastructure of governance degrades.
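One minimal way to state the codebook effect formally, under the simplifying assumption that the same context variable Z is available to both encoder and decoder, is through the conditional rate-distortion function:

```latex
% With the same context Z available at both ends, the conditional
% rate-distortion function is never larger than the unconditional one:
% a richer shared world can only lower the rate needed for a given fidelity.
\[
R_{X \mid Z}(D) \;=\; \min_{\substack{p(\hat{x}\mid x, z)\,:\, \mathbb{E}[d(X,\hat{X})] \le D}} I(X;\hat{X}\mid Z) \;\le\; R_{X}(D)
\]
```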

Third: compression failure produces a characteristic damage profile. When a compression system operates on impoverished input or with degraded codebooks, the damage does not appear as uniform degradation. It appears in a specific pattern: the system continues to function locally — it can process the signals it does receive — but loses the capacity to process novel signals, to adapt to changes in the source distribution, or to detect anomalies that fall outside its impoverished codebook. In ecology, this is monoculture vulnerability: the crop continues to grow until the pathogen finds it. In neuroscience, this is structural anhedonia: the reward system continues to fire until habituation reduces baseline to flatness. In governance, this is metric fixation: the institution continues to optimise until a crisis arrives that is invisible to its metric set.

The characteristic signature of compression failure is therefore not immediate collapse but progressive brittleness — reduced range of response, reduced novelty tolerance, and eventually, catastrophic failure to signals that the system's codebook was never built to handle.
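A toy numerical illustration of this brittleness, using nothing from the paper's own data: a scalar quantiser whose codebook is fitted to a narrow input distribution reconstructs that distribution well but fails badly once the source widens again, while a codebook fitted to the wider distribution degrades gracefully. All parameter values and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_codebook(samples, n_codewords=8):
    """Fit a simple scalar codebook: codewords placed at evenly spaced quantiles."""
    qs = np.linspace(0.5 / n_codewords, 1 - 0.5 / n_codewords, n_codewords)
    return np.quantile(samples, qs)

def distortion(samples, codebook):
    """Mean squared error to the nearest codeword (the reconstruction distortion)."""
    errs = np.abs(samples[:, None] - codebook[None, :])
    return float(np.mean(np.min(errs, axis=1) ** 2))

# A "diverse" source spans a wide range; a "monoculture" source clusters narrowly.
diverse_train = rng.normal(0.0, 3.0, 10_000)
narrow_train = rng.normal(0.0, 0.3, 10_000)

diverse_code = fit_codebook(diverse_train)
narrow_code = fit_codebook(narrow_train)

# Novel signals drawn from the wide distribution the system may eventually face.
novel = rng.normal(0.0, 3.0, 10_000)

print("narrow codebook on narrow input :", distortion(rng.normal(0, 0.3, 10_000), narrow_code))
print("narrow codebook on novel input  :", distortion(novel, narrow_code))   # brittleness
print("diverse codebook on novel input :", distortion(novel, diverse_code))  # graceful degradation
```

The narrow codebook is not broken; it simply never encountered the range of signals it is later asked to represent, which is the damage profile described above.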

The monoculture operator

We can now define the monoculture problem with precision.

Let S be an adaptive system with compression mechanism C, operating on input distribution P(x) from source space X, with codebook K, producing compressed representations R(x) subject to distortion bound D.

The monoculture problem occurs when the effective source distribution P_eff(x) contracts — when the range of signals actually presented to the system narrows relative to the range the compression mechanism was designed for. This contraction can occur through two channels:

Input impoverishment: The signal environment narrows, reducing the diversity of inputs reaching the system. At the neurochemical level, this is the narrowing of the stimulus environment to a few high-frequency signal types. At the sleep level, this is the homogenisation of waking experience. At the linguistic level, this is filter bubble fragmentation.

Codebook degradation: The shared representational frameworks that compression depends on erode. At the linguistic level, this is the thinning of shared worlds. At the economic level, this is the replacement of multi-dimensional social reality with GDP. At the governance level, this is metric fixation.

In both cases, the compression system's effective capacity — its ability to maintain low distortion at a given rate — declines not because the system itself has been damaged, but because the conditions its design assumed have been removed. The system continues to function, optimising within its impoverished input regime, generating outputs that appear locally coherent but are globally misrepresentative.
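One illustrative way to make the contraction measurable, offered as an assumption of this sketch rather than a result of the framework: compare the entropy of the effective input distribution with the entropy of the distribution the mechanism was built for.

```latex
% Illustrative operationalisation (an assumption of this sketch, not part of
% the framework itself): monoculture severity as the diversity deficit of the
% effective input P_eff relative to the design distribution P.
\[
M(S) \;=\; 1 \;-\; \frac{H\!\left(P_{\mathrm{eff}}\right)}{H\!\left(P\right)}
\]
```

On this reading, M(S) is zero when the system receives the full diversity it was designed for and approaches one as the effective input collapses toward a point mass.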

This is why the pathology cannot be seen from within the optimising framework. An institution optimising for its metric set will appear to be performing well by that metric, up to the point at which the unrepresented signals produce consequences that force their way into the represented domain. A nervous system running on three channels will feel normal to the person running on it — until the structural anhedonia from dynorphin accumulation produces a chronic flatness that they cannot attribute to any specifiable cause, because the cause is the absence of signals rather than the presence of something wrong.

The monoculture problem is, in each case, an absence that produces damage.

Why these are instances, not analogies

A persistent worry about cross-level arguments of this kind is that they are merely metaphorical — that calling neurochemical range contraction and GDP monoculture by the same name is literary convenience rather than theoretical insight. This worry deserves a direct response.

The rate-distortion framework provides a literal, not metaphorical, unification: at each level, the same four quantities (what is being compressed, the input distribution, the codebook, and the distortion measure) can be specified, and the template developed below makes this explicit.

The claim that these are instances of the same problem means that the same formal operation — the narrowing of effective input diversity relative to what the compression mechanism requires — produces structurally similar damage profiles across all five levels. It does not mean the mechanisms are identical. It means they belong to the same class of failure.

This matters for diagnosis and for remedy. If the five levels were merely analogous, each would require a domain-specific fix. If they are instances of the same problem, then the solution type — restore signal diversity to the compression mechanism that requires it — has a common structure across domains, even as its implementation is domain-specific.

A common template for each level

For the cross-level claim to be more than architecturally convenient, each level section must specify the same four quantities. Where these cannot be specified, the claim that rate-distortion theory applies literally — rather than analogically — cannot be sustained.

1. What is being compressed: The input signal space and the representation that the compression mechanism produces from it. This must be concrete: not "information" in the general sense, but a specifiable source distribution and a specifiable output representation.

2. The input distribution: The range of signals that actually reach the compression mechanism under the conditions being analysed. This is where monoculture enters: the claim is that late modernity narrows this distribution relative to what the mechanism evolved or developed to handle.

3. The codebook: The shared representational framework that makes compression efficient. In language, this is common ground. In governance, it is the shared model of what counts as a signal worth acting on. Codebook degradation is the second pathway through which monoculture damages compression.

4. The distortion measure: What counts as a bad reconstruction in this domain. This is the hardest element to specify and also the most important: it determines whether the compression is succeeding or failing. A nervous system that activates only three neuromodulatory channels while leaving the remainder chronically dark has high distortion by the measure of experiential range, even if it performs adequately by the measure of arousal response.

Each of Sections 3–7 is built on this template. Where the template cannot be fully populated — where the input distribution is difficult to measure or the distortion function remains underspecified — this is noted explicitly. The framework's value is not that it provides ready answers at every level but that it makes the precise nature of the remaining empirical questions visible.
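For readers who find a concrete container helpful, the template can be written as a simple data structure; the field names and the Level 1 entries below merely restate the template table in Section 3 and carry no additional claims.

```python
from dataclasses import dataclass

@dataclass
class CompressionTemplate:
    """The four quantities each level-specific section must specify."""
    what_is_compressed: str
    input_distribution: str
    codebook: str
    distortion_measure: str

# Level 1 (neurochemical), restated from the template table in Section 3.
neurochemical = CompressionTemplate(
    what_is_compressed="full space of experiential states -> behavioural and affective outputs",
    input_distribution="the ecological and social signal environment",
    codebook="the neuromodulatory ensemble itself",
    distortion_measure="loss of experiential and functional range",
)
```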

What would disconfirm the argument

Any serious scientific claim must specify what would count against it. Three conditions would substantially weaken this paper's central thesis.

First, if it were shown that the compression mechanisms at each level are not, in fact, input-diversity dependent — that the sleeping brain performs equally well on homogeneous versus diverse waking input, or that language communities with reduced shared context communicate with equal efficiency — the empirical basis of the argument at those levels would collapse.

Second, if the cross-level correlations predicted by the framework were absent — if neurochemical monoculture did not co-occur with linguistic fragmentation, or if economic monoculture did not co-occur with institutional brittleness — the claim that these are instances of a common problem would lose support, even if each level's argument remained individually defensible.

Third, if systems that exhibit high signal diversity in one domain routinely showed worse compression performance — if shared worlds produced less efficient communication, if neurochemical range produced less adaptive behaviour — the sign of the argument would be reversed and the framework would require fundamental revision.

The paper does not assert that these disconfirming conditions are empirically ruled out. It asserts that the current evidence, surveyed across five levels in the sections that follow, supports the argument as stated — and that the framework generates sufficiently specific predictions that future evidence can, in principle, settle the question.


[Sections 3–7 follow: neurochemical, cognitive, linguistic, economic, and institutional levels in turn. Section 8 develops cross-level connections and causal mechanisms. Section 9 addresses implications for design, policy, and curriculum.]


Neurochemical Monoculture and the Contraction of Range

[First of the five level-specific sections, placed third in the paper's running order after the introduction and theory framework.]


Overview

The human nervous system is not a single communication channel. It is an ensemble of over one hundred identified signalling systems — neuropeptides, neurosteroids, neuroactive amino acids, trace amines, growth factors, cytokine-mediated signals, and the classical monoamines that dominate public discourse — each tuned to detect, encode, and respond to different aspects of the experiential world. The architecture is redundant, overlapping, and deeply ecological: it evolved in environments that regularly activated the full repertoire, not as a luxury but as a functional requirement for navigating a world that required awe and grief and embodied fatigue and meditative stillness as well as threat and reward and social warmth.

This section applies the rate-distortion framework to the neurochemical level. The central argument is that late modernity's dominant stimulus architecture selectively and repeatedly activates a narrow subset of this signalling ensemble while the conditions required to activate the remainder are systematically removed. This is not claimed as an established empirical fact about measurement-level neuromodulator concentrations across populations — that data does not currently exist in the required form. It is a framework hypothesis: a theoretical account, grounded in the functional architecture of the neuromodulatory system and in the ecological conditions of its original development, that generates testable predictions and that is consistent with available pharmacological, clinical, and behavioural evidence.

The section first applies the four-part template, then characterises the three overactivated systems and the counterregulatory mechanism that drives the section's most specific and most empirically anchored claim, then surveys the neglected architecture and the convergent conditions its activation requires, and finally addresses what would confirm and disconfirm the hypothesis.


The template applied

What is compressed: The full space of experiential states → behavioural and affective outputs that determine wellbeing, motivation, and adaptive response
Input distribution: The ecological and social signal environment: the full range of experiences, challenges, relationships, and conditions that activate different parts of the neuromodulatory architecture
Codebook: The neuromodulatory system itself: the ensemble of signalling systems whose joint activation patterns encode the experiential world and produce adaptive responses to it
Distortion measure: Loss of experiential and functional range: the degree to which the system's output is constrained to states activatable by the narrow input distribution it actually receives, relative to the states activatable by the full distribution it evolved to handle

The monoculture claim at this level: late modernity's dominant stimulus architecture narrows the input distribution, which narrows the activation patterns across the ensemble, which narrows the experiential range available to the organism. The system continues to function within the narrowed range — it is not damaged in the clinical sense — but it progressively loses the capacity to activate states outside the range of its habitual inputs. Range contraction is the primary damage profile.


3.1 The dominant triad: functional characterisation

The framework hypothesis identifies three neuromodulatory systems as disproportionately activated by late modernity's stimulus architecture, each through a specific and identifiable mechanism.

Dopaminergic anticipation. Dopamine mediates anticipation, seeking, and prediction of reward — not reward itself. Its functional role is to orient the organism toward predicted value, firing most powerfully not at receipt but at the signal preceding receipt, and most powerfully of all when that signal is unpredictable. Variable-ratio reinforcement schedules — which generate unpredictable rewards and thereby sustain dopaminergic activation without the resolution signal that would permit the system to settle — are the explicit design logic of social media feeds, notification architectures, and algorithmic recommendation systems. Chronic exposure to this schedule pattern downregulates dopamine D2 receptor density in animal models and, in studies of internet and gaming disorder, in human subjects. The direction is consistent: repeated high-frequency activation without natural resolution reduces receptor sensitivity, requiring progressively more stimulus for equivalent activation and leaving ordinary, unmediated experience increasingly insufficient. The claim that this pattern is producing population-level receptor changes from social media and smartphone use specifically is an extrapolation from those findings and should be read as hypothesis.

The cortisol stress axis. The hypothalamic-pituitary-adrenal axis evolved for acute, time-limited threat. Its adaptive function requires resolution: the threat ends, the system returns to baseline. Chronic low-grade activation — produced by financial precarity, social comparison, information overload, and artificial light disruption of circadian rhythms — provides no such resolution. Sustained cortisol elevation above acute thresholds is associated in the clinical literature with measurable volumetric reduction in hippocampal grey matter, the structure most central to memory consolidation, spatial navigation, and temporal context. The implication for the present argument is that a chronically activated stress axis is not merely unpleasant; it structurally degrades the capacity for the contextual, temporally extended forms of experience — sustained attention, autobiographical coherence, spatial immersion — that require an intact hippocampal system to generate.

Shallow oxytocin. Oxytocin mediates acute social warmth and affiliation, but its close neuropeptide relative vasopressin governs sustained pair-bonding, long-term attachment, and loyalty. Vasopressin accumulation requires repeated embodied co-presence over time; it cannot be generated by a single encounter, a parasocial relationship, or professional contact structured around project timelines. Digital and parasocial media environments are effective at triggering oxytocin-associated warmth responses while structurally unable to produce the vasopressin-mediated depth that sustained bonds generate. The additional complication is that oxytocin without the moderating influence of long-term attachment amplifies in-group/out-group differentiation — producing warmth that is simultaneously shallow and, at scale, potentially divisive.

These three characterisations are offered at different evidential levels. The dopaminergic receptor downregulation finding is anchored in replicated preclinical and clinical research, though its application to social media specifically requires caution. The cortisol-hippocampal relationship is well-established; the prevalence of the chronic low-grade activation pattern in high-income digital environments is consistent with self-report and actigraphy data but not directly measured at the required resolution. The oxytocin-vasopressin distinction is functionally grounded in the neuroendocrine literature; the population-level implication about digital sociality is theoretical.


3.2 The dynorphin floor: the section's most specific claim

Every peak has a floor. Dynorphin — the functional mirror image of the endorphins, acting on kappa-opioid receptors — produces dysphoria, perceptual dulling, and motivational withdrawal. It is the homeostatic counterweight to activation states: each substantial dopaminergic peak is followed by a dynorphin-mediated rebound, and the magnitude of the trough is related to the magnitude of the peak.

In conditions of episodic stimulation — the ancestral norm — this oscillation produces healthy cycles: motivated seeking followed by recuperative rest, peak activation followed by grounded baseline. In conditions of chronic high-frequency stimulation — the default condition of high-income digital environments — the oscillation produces a ratchet effect. Each cycle leaves the dopamine system slightly less sensitive; troughs progressively deepen; the baseline loses its capacity to register as adequate. The behavioural and affective consequence is what this paper proposes to call structural anhedonia: not the acute despair of clinical depression, but a persistent background flatness in which ordinary experience fails to register as sufficient.
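To make the ratchet logic concrete, a deliberately crude toy model is sketched below. It is not a physiological simulation: the update rule, parameter values, and variable names are illustrative assumptions chosen only to show how peak-trough cycling plus use-dependent blunting can drag a baseline downward when stimulation frequency outruns recovery.

```python
import numpy as np

def simulate_hedonic_baseline(n_days, events_per_day, peak_size=1.0,
                              overshoot=0.02, desensitisation=0.002,
                              overnight_recovery=0.05):
    """Toy opponent-process ratchet (illustrative only): each stimulation peak
    is followed by a counterregulatory trough that slightly overshoots it,
    repeated activation blunts sensitivity, and overnight recovery pulls both
    quantities partway back toward their resting values."""
    sensitivity = 1.0   # how strongly a stimulus registers (resting value 1.0)
    baseline = 0.0      # hedonic baseline (resting value 0.0)
    history = []
    for _ in range(n_days):
        for _ in range(events_per_day):
            peak = peak_size * sensitivity
            trough = -(1.0 + overshoot) * peak      # rebound slightly exceeds the peak
            baseline += peak + trough               # net effect of one cycle is negative
            sensitivity *= (1.0 - desensitisation)  # use-dependent blunting
        sensitivity += overnight_recovery * (1.0 - sensitivity)
        baseline += overnight_recovery * (0.0 - baseline)
        history.append(baseline)
    return np.array(history), sensitivity

episodic, s_episodic = simulate_hedonic_baseline(n_days=90, events_per_day=5)
chronic, s_chronic = simulate_hedonic_baseline(n_days=90, events_per_day=200)

print(f"episodic stimulation: baseline {episodic[-1]:+.2f}, sensitivity {s_episodic:.2f}")
print(f"chronic stimulation : baseline {chronic[-1]:+.2f}, sensitivity {s_chronic:.2f}")
```

With everything held fixed except event frequency, the episodic schedule equilibrates at a much shallower deficit and a much less blunted sensitivity than the chronic one, which is the qualitative contrast the text describes.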

This is the section's most specific and most empirically defensible claim. The preclinical evidence for dynorphin-mediated counterregulation following dopaminergic activation is robust. The pharmacological evidence for its role in mood dysregulation and motivational blunting is well-established, which is why kappa-opioid receptor antagonists are currently in clinical development for depression and anhedonia. What the paper adds — and what is explicitly marked as a theoretical extrapolation — is the population-level application: the hypothesis that chronic high-frequency variable-ratio stimulation, at the scale delivered by the contemporary attention environment, is producing a low-grade dynorphin-mediated anhedonic baseline in a substantial proportion of the population. The phenomenological signal — the widespread report of background flatness, insufficient ordinary experience, and the inability to feel satisfied by activities that previously registered as adequate — is consistent with this hypothesis. Direct measurement of kappa-opioid receptor occupancy and dynorphin levels in naturalistic populations would be required to confirm it.

The construct structural anhedonia is proposed to name this phenomenon specifically: it is structural in the sense that it is produced by the architecture of the stimulus environment rather than by individual pathology, and it is distinct from clinical anhedonia in that it does not require a diagnosis and is not accompanied by the broader vegetative and cognitive symptoms of major depression. Its empirical signature would be: lower hedonic baseline in proportion to engagement-architecture exposure, recoverable in principle by reduction of chronic dopaminergic stimulation and restoration of oscillation conditions, and distinct in its neural correlates from clinical anhedonia.


3.3 The neglected architecture: three convergent activation conditions

The neuromodulatory ensemble extends far beyond the three overactivated systems. Among the less-activated or structurally neglected systems are: the endogenous opioid system beyond dynorphin (including β-endorphin-mediated states of sustained physical exertion and deep social bonding); the endocannabinoid system governing sensory presence and the muting of intrusive cognition; the serotonergic system in its deeper, slower-acting forms rather than the acute effects targeted by SSRIs; neuropeptide Y and its role in stress resilience; GABA-mediated systems in sustained rest states; oxytocin in its vasopressin-related deep attachment form; and the various systems implicated in flow states, awe, grief, and the experience of temporal depth.

What is striking, and what the framework hypothesis emphasises, is that the activation conditions for this neglected architecture are not arbitrary. They converge on three features that the late-modern stimulus environment most systematically removes.

Slowness. Many of the neglected neuromodulatory states require extended, uninterrupted time to develop. β-endorphin release in sustained physical exertion requires duration; endocannabinoid-mediated sensory presence requires the absence of competing attentional demands; the deeper serotonergic states associated with clarity and equanimity require extended periods without acute dopaminergic activation. A stimulus environment structured around variable-ratio engagement mechanisms, constant connectivity, and the systematic elimination of unstructured time removes the temporal condition that these states require.

Physical embodiment. A substantial portion of the neglected architecture is activated through processes that require the body's direct engagement with physical and social environments: exercise, touch, manual work, outdoor exposure, co-present social interaction, and the sensorimotor engagement of skilled craft and physical play. Environments structured around screen-mediated interaction and sedentary work progressively eliminate these activation conditions without eliminating the underlying neuromodulatory architecture that depends on them.

Sustained attention over time. Several of the neglected systems are activated specifically by deep, extended engagement with a single object of attention: the reading of long-form narrative, the sustained practice of a musical instrument, the gradual mastery of a complex skill, the deepening of a long relationship. Attentional fragmentation — the structural feature most characteristic of the notification-mediated, multitab, algorithmically optimised information environment — is directly antagonistic to these activation conditions.

The convergence of these three deprivation conditions is theoretically important: it suggests that the neglected architecture is not randomly distributed across the neuromodulatory system but is concentrated in the systems most dependent on the temporal, embodied, and attentional features of experience that are systematically removed by late modernity's dominant design logic. This would explain why the phenomenology of neurochemical monoculture is not merely the presence of high stimulation but the specific character of what is absent: the depth, groundedness, and temporal extension of experience that the neglected systems generate.


3.4 The rate-distortion account at the neurochemical level

Applying the template with precision:

The input distribution at this level is the ecological and social signal environment — the full range of experiences, conditions, and challenges that activate the neuromodulatory ensemble. In ancestral environments, this distribution was rich and varied: physical exertion, slowness, sustained social relationships, manual and cognitive challenge, seasonal variation, embodied engagement with natural environments, the full spectrum of emotional experience including grief and awe and terror and belonging. Late modernity does not eliminate all of this, but it systematically narrows the distribution toward high-frequency, low-depth, digitally mediated, sedentary, and chronically alert signal types.

The codebook is the neuromodulatory system itself — the ensemble of signalling systems whose joint activation patterns constitute the brain's representation of the experiential world. The codebook was built over evolutionary time on the assumption that the full input distribution would be regularly encountered. Its compression efficiency — its ability to represent experience faithfully and to produce the adaptive responses that experience requires — depends on that full distribution being available.

What the distortion measure captures at this level is the loss of experiential and functional range: the degree to which states activatable only by the neglected inputs become inaccessible. The system continues to compress well within the narrow range it receives; it produces adequate outputs for the signals it does encounter. The distortion appears at the edges of the range — in the atrophy of states that require the inputs that are no longer arriving, and in the structural anhedonia produced by the dynorphin counterregulation that high-frequency activation of the remaining channels generates.

This is the rate-distortion signature of input monoculture rather than codebook degradation: the codebook itself is not damaged (the neuromodulatory architecture remains physiologically intact in most individuals), but the input distribution is so impoverished relative to what the codebook was built for that large parts of the representational space are chronically unused. Sustained deprivation of input to any compression system degrades its responsiveness to those inputs even when the mechanism itself is intact — a well-known effect in sensory deprivation research and in the use-dependence of synaptic plasticity. Neurochemical monoculture is, in this sense, a form of learned narrowness rather than structural damage.


3.5 Evidential status and what would disconfirm this argument

The argument at this level is explicitly multi-tiered, with three distinct levels of evidential confidence.

Well-supported: The functional architecture of the neuromodulatory ensemble and the activation conditions for its different components. The dopamine receptor downregulation pattern in high-frequency stimulation conditions. The cortisol-hippocampus relationship. The dynorphin counterregulatory mechanism and its pharmacological implications. These are not contested; they are the empirical anchors of the framework hypothesis.

Supported with appropriate caveat: The characterisation of late modernity's stimulus environment as selectively activating the triad while removing the activation conditions for the neglected architecture. This rests on convergent inference from functional descriptions of what different technologies do, from self-report and behavioural data on how time is allocated, and from ecological and epidemiological patterns — but it does not rest on direct neuromodulatory measurement in naturalistic populations.

Framework hypothesis requiring direct test: The structural anhedonia claim at population scale. This is the paper's most original and most empirically underspecified claim at this level. Its signature is defined above; the direct test would require naturalistic measurement of kappa-opioid receptor occupancy, hedonic baseline scores in proportion to engagement architecture exposure, and longitudinal data on hedonic recovery following structured disengagement from high-frequency stimulation environments.

Two findings would substantially weaken the argument. First, if naturalistic measurement showed that kappa-opioid receptor occupancy or hedonic baseline did not correlate with engagement architecture exposure, the structural anhedonia hypothesis would lose its primary biological anchor. Second, if the neglected neuromodulatory systems were routinely activated in high-income digital environments through alternative pathways — if, for instance, the endocannabinoid or endogenous opioid systems were adequately engaged through sedentary and screen-mediated activity — the range contraction claim would be undermined. Neither finding would disprove the rate-distortion framework at this level, but either would require significant revision of the specific empirical claims it is currently making.


3.6 Connection upward and downward

The neurochemical level is the most foundational in the paper's causal architecture, in one specific sense: the experiential range available to the organism shapes what enters the waking experience trace from which Level 2's cognitive compression must operate.

A nervous system running on neurochemical monoculture — chronically engaged with high-frequency dopaminergic stimulation, chronically cortisol-activated, chronically deprived of the slower, deeper, embodied states in the neglected architecture — produces a waking experience that is tonally narrow, emotionally repetitive, and attentionally fragmented. This is exactly the kind of impoverished input that Level 2's graph distillation model predicts will degrade sleep compression. The sleeping brain cannot distil depth from waking experience that contains no depth. The connection between Level 1 and Level 2 is not analogical: it is that the neuromodulatory state of waking hours directly shapes the quality and variety of the experience trace that the sleep distillation operator must then compress.

Upward to Level 3: the neurochemical state of the interlocutors shapes their capacity to build and sustain common ground. States of chronic cortisol activation increase threat perception and in-group/out-group differentiation. Dopamine depletion reduces motivational investment in the effortful work of perspective-taking and repair that shared-world construction requires. The neurochemical architecture of the interlocutors is part of the context x in the rate-distortion formalism of language — and a systematically narrowed neurochemical range produces a systematic bias in the communicative context toward states that impede rather than support shared-world construction.


[Section 4 follows: Sleep as Graph Distillation and the Cognitive Compression Failure.]


Sleep as Graph Distillation and the Cognitive Compression Failure

[Second of the five level-specific sections, placed fourth in the paper's running order.]

Overview

Sleep is the brain's most explicit compression operation, and the one whose formal structure is most precisely specifiable. Unlike the neurochemical level, where the compression mechanism is an ecological inference from functional architecture, or the linguistic level, where the compression is distributed across interlocutors and time, the sleeping brain's compression process has an identifiable computational structure, a specifiable objective function, and a mathematically defined relationship between input quality and output quality. This makes Level 2 the paper's most formally tractable level — and the one that most directly demonstrates, rather than analogises, the rate-distortion account.

The framework developed in the institute's prior work (Ricketts and Jhingan, 2026) models sleep as a graph distillation operator: a process that transforms the day's waking experience trace — represented as a weighted graph of associations, transitions, and contextual connections — into a sparser, more modular, higher-utility representation. The formal objective is to minimise the description-length complexity of the resulting graph while preserving its utility for prediction, retrieval, and decision-making. The claim of this section is that this distillation process is input-dependent in a specific and consequential way: it requires a waking experience trace that contains genuine semantic and structural variety. When the input is homogeneous — when the waking day produces an experience graph that is tonally narrow, semantically repetitive, and attentionally fragmented — the compression algorithm cannot identify what is structurally important, because importance is defined by contrast with the rest of the distribution.


The template applied

What is compressed: The day's waking experience trace (a weighted graph of events, associations, and contextual connections) → a sparse, modular, high-utility memory structure
Input distribution: The variety and structural richness of waking experience: the semantic range, emotional diversity, and attentional depth of what is encountered during waking hours
Codebook: The graph structure of prior knowledge: the existing network of concepts, memories, and relationships that determines which new experience nodes connect where and therefore which are recognised as structurally central
Distortion measure: Loss of memory utility: the degree to which the post-sleep graph fails to retain structurally important material, conflates distinct concepts, or produces degraded retrieval and prediction performance

The monoculture claim at this level: when waking experience is systematically homogenised — when the experience trace is dominated by a narrow range of emotional registers, attentional modes, and semantic domains — the distillation operator cannot distinguish signal from background. The result is a compressed representation that is either too sparse (important material pruned because it lacks relative salience) or poorly structured (similar but distinct concepts merged because the contrast that would distinguish them was never encountered).


4.1 The formal framework: sleep as graph distillation

The Ricketts and Jhingan (2026) framework begins from a dissatisfaction with the dominant theoretical vocabulary of sleep and memory research. Terms like "consolidation," "abstraction," and "gist extraction" describe what sleep does without specifying an optimisation objective — what criterion is sleep solving for? The graph distillation model supplies that objective:

Minimise ℒ(G′) subject to 𝒰(G′) ≥ 𝒰(G) − ε

where G is the waking experience trace graph, G′ is the post-sleep graph, ℒ is a description-length complexity functional (the biological cost of maintaining the representation), 𝒰 is a utility functional (the graph's value for future retrieval, prediction, and decision-making), and ε bounds the permitted utility loss. The distillation process must produce a simpler representation while preserving capacity for adaptive action.

Five operators implement this objective, each with a direct biological correlate. Global shrinkage (synaptic homeostasis) uniformly downscales all edge weights. Replay reinforcement (NREM sharp-wave ripples) selectively strengthens edges activated during consolidation. Edge sparsification prunes weakly weighted connections below a threshold. Coarse-graining merges closely related nodes into supernodes — the mechanism of schema and gist formation. Index construction adds shortcut edges between hub nodes for efficient retrieval.

The key prediction that follows from the interaction of global shrinkage and replay reinforcement is centrality preservation: edges connecting structurally central nodes — those with many connections, high betweenness centrality, or high clustering coefficients — receive disproportionate replay protection and therefore survive the sparsification threshold at higher rates than peripherally connected edges. This is a quantitative prediction, not merely a qualitative one: under replay policies biased toward structural salience, the expected post-sleep weight of an edge is proportional to its structural score.
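A minimal computational sketch of the operator logic and the centrality-preservation prediction, assuming a toy experience graph and arbitrary parameter values; it is not the Ricketts and Jhingan (2026) implementation, and it omits coarse-graining and index construction for brevity.

```python
import networkx as nx

def distill(graph, shrink=0.5, replay_boost=2.0, prune_threshold=0.3, n_hubs=1):
    """Toy distillation pass: global shrinkage, centrality-biased replay
    reinforcement, and edge sparsification below a threshold."""
    g = graph.copy()
    centrality = nx.betweenness_centrality(g)
    hubs = sorted(centrality, key=centrality.get, reverse=True)[:n_hubs]
    for u, v, data in g.edges(data=True):
        w = data["weight"] * shrink          # global shrinkage (uniform downscaling)
        if u in hubs or v in hubs:
            w *= replay_boost                # replay biased toward structurally central nodes
        data["weight"] = w
    weak = [(u, v) for u, v, d in g.edges(data=True) if d["weight"] < prune_threshold]
    g.remove_edges_from(weak)                # sparsification of weakly weighted edges
    return g

# A toy "waking experience trace": one strongly connected hub plus weakly
# linked peripheral pairs.
trace = nx.Graph()
trace.add_weighted_edges_from([("hub", n, 0.8) for n in "abcde"])
trace.add_weighted_edges_from([("a", "b", 0.4), ("c", "d", 0.35), ("x", "y", 0.4)])

after_sleep = distill(trace)
print("edges before:", trace.number_of_edges(), "edges after:", after_sleep.number_of_edges())
print("surviving edges:", sorted(after_sleep.edges()))
```

On this toy input the edges incident to the structurally central node survive the pruning threshold while the peripheral ones do not, which is the qualitative content of the centrality-preservation prediction.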

This prediction is empirically anchored in the findings of Feld, Bernard, Rawson, and Spiers (2022), who showed that sleep preferentially consolidates memories involving globally and locally central nodes in explicitly learned graph networks. This cannot be derived from non-graph-based consolidation accounts; it is a specific structural prediction that the distillation model generates and the empirical record confirms.


4.2 The input-quality dependency

The critical property of this framework for the monoculture argument is that the distillation operator's performance is not independent of its input. Centrality preservation requires that the input graph contain genuine structural variation — nodes that are genuinely more connected, more conceptually central, more predictively valuable than others. If the input distribution is homogeneous, centrality differences collapse: when everything encountered during the day is structurally similar (same emotional register, same attentional mode, same semantic domain, same stimulation pattern), the algorithm cannot discriminate what is worth preserving from what is not, because the contrast that makes discrimination possible is absent.
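The input-dependency can be illustrated with the same machinery: the sketch below, again using arbitrary toy graphs, compares how much centrality contrast a varied trace and a homogeneous trace offer a replay bias to work with.

```python
import networkx as nx

def centrality_spread(g):
    """Max minus min betweenness centrality: the contrast available to a
    replay policy that must decide which parts of the trace to protect."""
    c = nx.betweenness_centrality(g)
    return max(c.values()) - min(c.values())

varied = nx.star_graph(6)        # one clearly central node, six peripheral ones
homogeneous = nx.cycle_graph(7)  # every node structurally identical to every other

print("centrality spread, varied trace     :", round(centrality_spread(varied), 3))
print("centrality spread, homogeneous trace:", round(centrality_spread(homogeneous), 3))
```

When the spread is zero, a replay policy biased toward structural salience has nothing to bias toward, which is the formal content of the claim that homogeneous input collapses centrality differences.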

This is the formal mechanism connecting Level 1 to Level 2. A waking day dominated by neurochemical monoculture — high-frequency dopaminergic stimulation with attentional fragmentation, without the depth and variety of the neglected neuromodulatory architecture — produces an experience trace graph that is tonally flat, semantically narrow, and structurally repetitive. Scrolling, notification-checking, brief social contact, and passive consumption generate many edges in a small semantic neighbourhood rather than fewer edges spanning a rich and varied conceptual space. The distillation operator receives impoverished input: a dense local cluster where structural centrality is poorly differentiated, rather than a sparse-and-varied graph where centrality is meaningful.

The consequence for the compressed output is precisely what the framework predicts. Replay will strengthen the most-connected nodes, but in a homogeneous graph those nodes are only marginally more connected than their neighbours. The resulting post-sleep representation will be denser than optimal (insufficient sparsification, because nothing clearly dominates), less modular than optimal (coarse-graining merges items that are merely similar rather than conceptually equivalent), and shallower than optimal (index construction is less efficient when the graph has no clear hub structure). The memory representation that serves the following day is less useful, less well-organised, and less retrieval-efficient than the same waking hours with richer experiential input would have produced.

There is a secondary effect worth noting. Sleep deprivation and sleep disruption are well-documented consequences of the same stimulus environment that produces neurochemical monoculture — artificial light disruption of circadian rhythms, late-night screen engagement, and the hyperarousal associated with chronic cortisol activation all fragment or truncate sleep architecture. The present argument adds a dimension to this observation: it is not only that less sleep impairs memory; it is that even sufficient sleep on impoverished input produces distorted memory. The quantity of sleep and the quality of its input are both relevant, and standard sleep research has attended much more carefully to the former than the latter.


4.3 Evidential status

The formal framework is anchored in empirical evidence at two levels. The operator family (global shrinkage, replay reinforcement, sparsification, coarse-graining, index construction) maps directly onto established sleep mechanisms with strong empirical support. The centrality preservation proposition is derived mathematically from those operators and receives empirical support from the Feld et al. (2022) data. These are the framework's solid foundations.

The input-quality dependency claim — that homogeneous waking input degrades distillation quality — is a theoretical extrapolation from the formal structure. It generates a specific prediction: participants whose waking hours involve richer experiential variety (as operationalised by measures of semantic range, emotional diversity, and attentional depth in daily experience sampling) should show better post-sleep graph utility (measured by consolidation quality on explicitly structured learning tasks) than participants with more homogeneous waking experience, controlling for sleep quantity and architecture. That experiment has not been conducted. It is the clearest falsifiable prediction the framework generates at Level 2, and it constitutes a priority for the empirical research programme.


4.4 Cross-level connections

The upward connection from Level 2 to Level 3 is functionally direct. The post-sleep graph — the memory representation that the distillation operator produces — is part of the substrate from which shared worlds are built. Common ground in language is not just shared cultural reference and awareness of current events; it is the accumulated conceptual and experiential architecture that makes certain meanings immediately available and certain inferences automatic. A person whose sleep compression has been operating on impoverished input over sustained periods has a shallower, less differentiated, less richly structured conceptual graph. They bring less to the construction of common ground; their codebook is thinner and less varied; their communicative compression requires more explicit encoding for the same meaning. Level 2's cognitive compression failure is, in this sense, a downstream driver of Level 3's linguistic compression failure.

The downward connection — back to Level 1 — is equally tight. Sleep quality modulates the sensitivity of the neuromodulatory systems most implicated in the neurochemical monoculture hypothesis. Sleep-deprived individuals show elevated dopamine receptor sensitivity and heightened reward salience for immediate stimuli, reduced prefrontal regulation of impulsive seeking, and increased HPA axis reactivity. Poor-quality sleep therefore feeds back to amplify the neurochemical conditions that generate the impoverished waking input in the first place. The two levels form a self-reinforcing loop: neurochemical monoculture degrades sleep input quality, degraded input quality degrades sleep compression, and degraded sleep compression amplifies susceptibility to neurochemical monoculture.



Linguistic Monoculture and the Fragmentation of Shared Worlds

[This is the third of the five level-specific sections, placed fifth in the paper's running order after the introduction, theory framework, neurochemical level, and cognitive/sleep level.]


Overview

Language is a compression system. Every utterance is a compressed encoding of a meaning that the speaker cannot transmit in full, directed at a listener who must reconstruct something close enough to that meaning for the communication to succeed. That this process happens billions of times a day, mostly successfully, mostly without effort, is easy to take for granted. Understanding why it works — and what happens when it starts to fail — requires being precise about what the compression depends on.

This section applies the rate-distortion framework to language and communicative infrastructure. The argument proceeds in four steps. First, we establish the rate-distortion account of human communication, drawing on the institute's prior formal work. Second, we identify what functions as the codebook in this domain — shared worlds, or common ground — and show that its degradation directly increases encoding cost and distortion. Third, we characterise late modernity's distinctive effects on shared worlds, identifying the mechanisms through which it fragments the compression infrastructure of language. Fourth, we apply the four-part template to specify what exactly is being compressed, what the input distribution is, what the codebook is, and what counts as distortion failure — and use this to generate testable predictions.


The template applied

Element | This level
What is compressed | Intended meaning → acoustic/textual signal, such that a listener can reconstruct the meaning within acceptable distortion
Input distribution | The full space of communicable meanings within a community, shaped by the richness of shared experience, expertise, history, and cultural reference
Codebook | Common ground: the overlapping stock of knowledge, experience, convention, and expectation that speaker and listener share and can rely on without explicit encoding
Distortion measure | Mismatch between intended meaning and reconstructed meaning; includes failed inference, misunderstanding, and the subtler degradation in which communication becomes technically correct but affectively or semantically shallow

The monoculture claim at this level: late modernity systematically degrades common ground — and therefore degrades the codebook — while simultaneously homogenising the input distribution toward a narrow range of communicable meanings. Both pathways damage compression.

A note on terminology used throughout this section: shared world refers to the accumulated referential environment a community holds in common — history, culture, expertise, convention. Common ground refers to the subset of that shared world that is active and accessible in a particular communicative context. Codebook is the technical rate-distortion term: the representational framework that allows encoder and decoder to share compressed representations efficiently. These three are deliberately layered, not synonymous. Shared worlds enable common ground; common ground functions as the codebook for real-time compression. When this section describes late modernity as degrading the codebook, it means that degradation runs through all three levels: the shared world erodes, which impoverishes available common ground, which undermines the compression efficiency of actual communication.


5.1 The rate-distortion account of language

Cross-linguistic research has documented a striking regularity: languages with higher information density per syllable are spoken more slowly, while languages with lower density are spoken faster, such that a broadly similar speech information rate — approximately 39 bits per second — is observed across typologically diverse languages. The tempting interpretation is that this reflects a hard physiological ceiling on the human communication channel.

The institute's prior formal work on this pattern (Ricketts, 2026a) argues that this overstates the evidence and misidentifies the mechanism. The convergence is better understood as evidence that human languages inhabit a context-adaptive regime of collaborative compression: a regime in which utterance rate, redundancy, and tolerated ambiguity are jointly calibrated to support low-distortion meaning reconstruction under finite real-time inferential capacity. The 39 bits/s figure is not a ceiling. It is a trace of an equilibrium — one among many possible points on the rate-distortion curve — that reflects the compression regime human communication has settled into given its typical context conditions.

The formal core of this account is a context-indexed rate-distortion function. Given context x, the minimum encoding rate required to transmit meaning m within distortion bound D is:

R_x(D) = min_{p(u|m,x)} I(M; U | X = x) subject to E[d_x(m, m̂)] ≤ D

where M is the intended meaning, U the utterance that encodes it, X the shared context, m̂ the listener's reconstructed meaning, and d_x the distortion function.

Two features of this formalism matter for the present argument.

First, conditioning on context x reduces the mutual information that the utterance must carry: when the listener already knows a great deal about what the speaker is likely to mean — because they share a history, a domain of expertise, a set of cultural references, a conversational record — the utterance needs to resolve less residual uncertainty. Rich common ground directly lowers the minimum encoding rate required for a given distortion level. Shared world is therefore not social background to communication. It is, in the formal sense, compression infrastructure.
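
The first feature can be illustrated with a toy calculation; the joint distribution below is invented purely for exposition and carries no empirical content.

```python
# Toy illustration of why shared context lowers the encoding burden.
# The joint distribution p(m, x) is invented for exposition only.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Rows: four candidate meanings M. Columns: two contexts X the pair might
# share (say, "discussing the operation" vs "discussing lunch plans").
joint = np.array([
    [0.30, 0.02],
    [0.15, 0.03],
    [0.03, 0.27],
    [0.02, 0.18],
])
p_m = joint.sum(axis=1)   # marginal over meanings
p_x = joint.sum(axis=0)   # marginal over contexts

h_m = entropy(p_m)
h_m_given_x = sum(p_x[j] * entropy(joint[:, j] / p_x[j]) for j in range(joint.shape[1]))

print(f"H(M)   = {h_m:.2f} bits")          # uncertainty the utterance must resolve alone
print(f"H(M|X) = {h_m_given_x:.2f} bits")  # residual uncertainty when x is shared
# Lossless transmission requires at least H(M|X) bits on average when context
# is shared: richer common ground directly lowers the minimum encoding rate.
```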

Second, the distortion function d_x is itself context-indexed. Losing a hedge in casual conversation and losing a hedge in a surgical instruction are not equivalent distortions; they are governed by different distortion functions reflecting different stakes, different conventions, and different tolerance for inferential error. This means that the assessment of whether a compression regime is working cannot be made from outside the communicative context. It requires knowing what the relevant distortion function is — what kinds of meaning loss matter in this communicative situation.


5.2 Common ground as compression machinery

Experimental work in collaborative reference — beginning with Clark and Wilkes-Gibbs (1986) and extended by a substantial subsequent literature — provides the most direct empirical support for this account. In these paradigms, partners describing abstract figures to each other progressively shorten their descriptions across trials, converging on private labels that carry high information for this pair at minimal encoding cost. Critically, the compression is partner-specific: the efficiency gains track accumulated common ground, not mere repetition or familiarity effects, and do not transfer to new partners who lack the shared referential history.

This is exactly what the formalism predicts. The context x in the rate-distortion function is partner-specific, accumulated turn by turn through the interaction dynamics captured in the formalism's context-update equation. Common ground is not a fixed background but a jointly constructed resource, built through communication and enabling future communication to be more efficient. The compression infrastructure of language is, in this sense, inherently relational and historical.

Three implications follow that will matter for the monoculture argument.

First, the most efficient communicators are not those with the largest vocabularies or the most explicit encoding habits, but those with the richest shared context relative to their interlocutors. Expert pairs in a domain communicate with high efficiency not because they use more precise language, but because their shared codebook allows them to use less language to transmit more.

Second, early miscommunications that are repaired may actually accelerate subsequent compression efficiency. A miscommunication that is noticed, corrected, and resolved leaves a durable trace — a newly established convention, a clarified referent — that smooth exchanges do not. This means that systems designed to eliminate miscommunication entirely may paradoxically impoverish the common ground that efficient communication depends on.

Third, the progressive compression visible within conversations is fragile with respect to context change. Move the same two partners to a new domain, introduce new participants, or disrupt the accumulated conversational history, and compression efficiency collapses back toward baseline. The codebook that enables efficiency is local and relational; it does not automatically transfer.


5.3 Late modernity as a shared-world fragmentation machine

If common ground functions as compression infrastructure, then anything that systematically degrades common ground degrades the communicative capacity of a community. Three mechanisms do this simultaneously in late modernity.

Algorithmic content personalisation optimises for individual engagement, not for shared exposure. Over time this produces communities in which members consume non-overlapping information environments. The shared cultural reference that enables efficient cross-member communication progressively narrows. Codebooks diverge. The resulting communication continues to occur, but at higher encoding cost for a given distortion level — or higher distortion for the same effort. Crucially, this is distinguishable from simple disagreement: two people who share a common informational world but hold opposing views can still communicate efficiently about what they share. Two people algorithmically sorted into non-overlapping environments lack common ground from which efficient communication can proceed, even when they seek it. They are encoding and decoding with incompatible codebooks.

Platform truncation of conversational context degrades common ground construction at the interaction level. The context-update dynamics through which common ground is built depend on sustained, bidirectional exchange with repair mechanisms and grounding feedback. Digital communication platforms, optimised for scale and brevity, structurally attenuate these dynamics. Character limits, asynchronous exchange, absent non-verbal cues, and the audience presence effect — the awareness that communication is recorded and may be seen beyond the intended interlocutor — all reduce common ground accumulated per exchange relative to face-to-face interaction. What resembles communication produces less compression infrastructure than the surface activity suggests.

Social sorting erodes cross-group shared worlds through a slower process. Residential, educational, and occupational clustering has reduced the frequency and depth of sustained contact across demographic and ideological lines in high-income societies. The shared worlds that enable efficient cross-group communication — overlapping cultural knowledge, the experience of navigating disagreement within sustained relationships — are products of that contact. Codebook degradation here is not visible in any single communicative act; it accumulates across a population over time, producing a slow rise in inter-group encoding cost and distortion that appears at the cultural level as polarisation and misattribution.

LLMs as diagnostic probe. The behaviour of large language models provides an instructive, if diagnostically limited, window on what language looks like when compression runs without shared world. LLMs trained on vast corpora produce locally fluent output — sentence and paragraph coherence is high, vocabulary and register are appropriate — while exhibiting a characteristic failure mode: confident assertion without epistemic warrant, apparent creativity that resolves on inspection to statistical proximity in embedding space, global semantic shallowness beneath local surface density. The rate-distortion framework predicts an important component of this pattern. A system that has learned the distributional regularities of a codebook without access to the referential world that codebook was built to encode can produce utterances that match the distributional profile of well-compressed meaning while not being well-compressed meaning. It is worth being precise about what this argument does and does not claim: the framework does not offer a complete account of current LLM behaviour, which reflects multiple interacting factors including training objectives, scale effects, and RLHF dynamics. The claim is narrower — that the absence of shared world predicts a specific distortion signature, visible in referential coherence across contextual distance, that is part of what the empirical record shows.


5.4 The input distribution: a subordinate but real pathway

The codebook degradation argument is the primary compression mechanism at this level, and it is the one most directly supported by formal theory and experimental evidence. There is a second, related pathway worth identifying — while being clear that it is currently less well-evidenced and in some respects harder to separate from the first.

The input distribution at Level 3 is the range of communicable meanings that actually circulates within a community: the semantic field traversed by public discourse, the range of registers, concerns, and experiences that people routinely communicate about and therefore develop shared vocabulary for. This distribution is shaped by media architecture.

Platform-mediated communication optimised for engagement systematically skews toward high-arousal, polarising, or novelty-signalling content. The hypothesis — stated here as hypothesis — is that this narrows not just tone but the semantic field of public discourse: the contemplative, the locally specific, the technically complex, the morally nuanced, and the slow are progressively displaced by the subset of communicable meanings that generate high-frequency engagement responses. If this is correct, the consequence for compression efficiency mirrors Level 1 and Level 2: a communication community whose shared discourse covers a narrow range of semantic space develops a codebook calibrated to that range, and becomes progressively less capable of efficiently encoding or decoding meanings outside it.

The distinction from codebook degradation is that here the loss is not in shared-world depth (common ground per topic) but in shared-world breadth (the range of topics for which common ground has been built at all). Both pathways damage the compression system; they do so through different mechanisms and would require different empirical approaches to distinguish. Separating them is a task for future work. For the current synthesis, it is sufficient to note that they are both instances of input poverty — the compression mechanism receiving less diverse signal than it was built to handle — and that the codebook pathway is the one on which the formal and empirical grounding is currently stronger.


5.5 Distortion and its asymmetry

A distinctive feature of communicative distortion is its asymmetry. Successful communication leaves few traces — the meaning went through, the interaction ended, and neither party is aware that compression worked. Failed compression leaves many traces, but they are attributed to bad faith, stupidity, or irresolvable disagreement rather than to codebook mismatch.

This asymmetry has a diagnostic implication. The degradation of shared worlds will not appear primarily as a visible failure of communication. It will appear as a diffuse increase in the effort required to communicate across difference, a rise in the rate of miscommunication that is attributed to the wrong causes, and a gradual withdrawal from communicative contexts where the codebook mismatch is likely to be costly. People do not typically say "I am avoiding this conversation because our shared world has thinned to the point where compression efficiency is low." They say "it's not worth the effort," "we'll never agree," or simply disengage. The distortion is real; its cause is invisible within the framework that generates it.

The rate-distortion account makes this mechanism visible. It predicts that communities experiencing shared-world fragmentation will show increased communicative effort, reduced communicative range, increased attribution of communicative failure to fixed properties of the interlocutor (stupidity, bad faith), and progressive withdrawal from high-codebook-mismatch contexts. These are measurable. They are also the pattern that a substantial empirical literature on political polarisation, social cohesion, and cross-group communication documents, without providing a mechanism for it.


5.6 Empirical predictions and evidential status

The framework generates several predictions that go beyond what existing accounts provide.

Prediction 1 — Compression efficiency should be partner- and history-specific, not just domain-specific. Existing work largely confirms this within experimental dialogues. The monoculture extension predicts that pairs embedded in richer shared informational environments — more overlapping consumption of news, more shared cultural reference, more sustained contact — should show faster compression convergence in collaborative tasks than pairs whose informational environments have been algorithmically sorted toward divergence. This is testable with current technology; it has not been tested.

Prediction 2 — Cross-group communicative efficiency should have declined measurably in highly sorted environments over time. If social sorting and algorithmic fragmentation have degraded shared worlds across demographic lines, communication efficiency between demographically non-overlapping pairs should have declined relative to pairs with overlapping information environments. Controlling for topic knowledge, this would show up as higher encoding effort, more repair episodes, and greater distortion on standardised collaborative tasks. The relevant data exist but have not been analysed with this question in mind.

Prediction 3 — LLM outputs should show a characteristic distortion signature that differs from human outputs in a specific direction. Human communication at the distortion threshold (communication from high shared-world pairs) should exhibit purposeful ellipsis — the compression is efficient because the codebook is rich, and low-information words are safely omitted. LLM outputs at the surface fluency level should exhibit apparent density without genuine compression — high symbol count with low codebook dependency. This difference should be detectable in semantic coherence across referential distance, not just local token probability. Initial work on LLM language structure by the institute (Ricketts, 2026b) supports this pattern; a direct test would require comparing referential coherence at varying contextual distances.

Prediction 4 — The repair-accelerating effect should be observable. If miscommunications that are resolved build common ground faster than smooth exchanges, then dyads exposed to communication noise early in their interaction (but given repair mechanisms) should show faster long-run compression convergence than dyads with smooth early exchange. This is counterintuitive enough to be worth testing directly.
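
One way the coherence measure in Prediction 3 might be operationalised is sketched below; the specific embedding model named is an assumption, and any sentence encoder could stand in for it.

```python
# Sketch of one operationalisation of "referential coherence across
# contextual distance" (Prediction 3). The embedding model is an assumption;
# any sentence encoder could substitute for it.
import numpy as np
from sentence_transformers import SentenceTransformer

def coherence_by_distance(sentences, max_lag=10):
    """Mean cosine similarity between sentence embeddings at each lag.
    Text whose compression rests on a rich codebook should hold coherence
    at long lags better than text that is only locally fluent."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(sentences, normalize_embeddings=True)
    curve = {}
    for lag in range(1, max_lag + 1):
        sims = [float(emb[i] @ emb[i + lag]) for i in range(len(emb) - lag)]
        if sims:
            curve[lag] = float(np.mean(sims))
    return curve

# Usage: compute the curve for matched human and LLM texts and compare its
# decay; the prediction is a steeper decay for LLM output once local fluency
# is controlled for.
```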

The strongest current evidence for the framework is at the within-dialogue level, where the collaborative reference literature provides robust support. The weakest current evidence is at the population level, where the shared-world fragmentation hypothesis is plausible and consistent with available data, but not directly tested using the compression-efficiency measures the theory requires. The LLM evidence is suggestive but requires more disciplined operationalisation to count as a genuine test. These are specific gaps — not reasons to abandon the framework, but precise targets for future empirical work.


5.7 Connection upward and downward

Linguistic monoculture does not exist in isolation. It is both produced by mechanisms operating at other levels and productive of mechanisms at other levels.

Connection downward to Level 2 (cognitive/sleep). The graph distillation model of sleep requires varied waking input to identify what is structurally important. Language and social communication are among the primary sources of that variation — shared stories, arguments, encounters with difference, the semantic disruption produced by genuine contact with other minds. A communicative environment that has been homogenised by filter bubbles and engagement optimisation produces not just degraded communication but degraded waking-hour input to the sleep distillation system. The two levels compound: neurochemical and communicative monoculture together impoverish the input that the sleeping brain needs to do its job.

Connection upward to Level 4 (economic). The degradation of shared worlds undermines the communicative infrastructure required for the collective deliberation and institutional trust that complex economies depend on. More specifically: the relational and social capital that the institute's wellbeing research identifies as a key driver of wellbeing efficiency above and beyond GDP is built through exactly the kind of sustained, high-common-ground communication that shared-world fragmentation degrades. Economic monoculture and linguistic monoculture are not independent phenomena; they share a common institutional driver (the attention economy's engagement-maximisation logic) and reinforce each other through the erosion of the social infrastructure that each depends on.

These connections will be developed in Section 8. The point here is that the levels are not parallel: they are causally coupled, and the coupling runs in both directions. The synthesis claim — that monoculture is a single cross-level problem, not five separate problems that happen to resemble each other — depends on making these couplings explicit and showing that they follow from the shared mechanism.




Economic Monoculture and the GDP Compression Failure

[Fourth of the five level-specific sections, placed sixth in the paper's running order.]


Overview

Every representation of an economy is already a compression. The trillions of daily decisions, relationships, exchanges, care acts, ecological interactions, and social negotiations that constitute economic life cannot be tracked in full. National accounts compress this dimensionality into tractable representations: prices aggregate preferences, GDP aggregates output, employment statistics aggregate labour market dynamics. Compression is not the problem. The problem is a specific compression failure: the codebook used to represent the economy has been progressively narrowed to the point where it systematically destroys the information most relevant to human flourishing while preserving the information most legible to the institutions that designed it.

This section applies the rate-distortion framework to economic measurement and the economic logic it entrenches. The argument is not primarily about measurement — the case that GDP is an inadequate welfare metric is well-established (Stiglitz, Sen and Fitoussi, 2009; Easterlin, 1974). The argument is about mechanism: why does GDP optimisation actively degrade wellbeing, rather than merely failing to maximise it? The rate-distortion framework provides the answer. GDP compression destroys the signal information required to detect, maintain, or recover the interaction effects and network goods on which wellbeing depends. Optimising for the compressed representation then actively drives the economy away from the conditions that produce genuine flourishing.


The template applied

Element | This level
What is compressed | The full multi-dimensional space of social, relational, ecological, and material activity → national accounts (GDP, employment, inflation, sectoral output)
Input distribution | All economically relevant signals: market transactions, care work, social capital formation, ecological services, relational density, distributional composition, network goods
Codebook | The System of National Accounts (SNA): a representational framework that prices, aggregates, and represents certain activity while rendering other activity invisible by design
Distortion measure | Loss of wellbeing-relevant signal: the degree to which the national accounts representation systematically misrepresents the conditions that produce human flourishing

The monoculture claim at this level: the SNA codebook was designed for a specific compression purpose (tracking market activity for fiscal and monetary management) and its application as a general-purpose wellbeing metric introduces systematic distortion that worsens as the economy is optimised toward the compressed representation.


6.1 The GDP compression problem: beyond the measurement critique

The critique of GDP as a welfare measure has a long history and a distinguished roster. Kuznets, who designed the US national accounts in the 1930s, warned explicitly against using GDP as a measure of welfare. The Stiglitz-Sen-Fitoussi commission (2009) produced an authoritative account of what GDP excludes. The Easterlin paradox — the failure of rising GDP per capita to produce rising average life satisfaction in affluent societies — has been documented across multiple decades and methodological approaches.

What is less often stated with precision is the mechanism through which GDP monoculture actively damages wellbeing, as distinct from merely failing to measure it. The rate-distortion framework makes this mechanism explicit.

The SNA compresses economic activity by representing only transactions with prices. This design choice, which was appropriate for its original purpose of tracking fiscal capacity and monetary dynamics, has three consequences that compound under optimisation.

First, the codebook systematically excludes network goods. Many of the goods that most powerfully drive human wellbeing are non-excludable, non-rival, and therefore not efficiently priced: healthy ecosystems, social trust, shared cultural infrastructure, the density and quality of local relationships, community cohesion. These goods are genuinely difficult to price not because economists have not tried, but because their value is largely constituted by their non-market character — pricing social trust transforms the social-exchange norm that generates it. A codebook designed to represent priced goods will therefore exclude exactly those goods whose value cannot survive pricing, and will appear to represent economic activity comprehensively while systematically omitting what matters most.

Second, the codebook destroys compositional and interaction information. Wellbeing does not respond to aggregate expenditure; it responds to expenditure composition and interaction effects among spending categories. This is the institute's central empirical finding from the Global Wellbeing Observatory: across 54 years of data from dozens of countries, wellbeing efficiency — the degree to which economic activity produces wellbeing outcomes above what per-capita income alone would predict — varies dramatically and systematically, in ways correlated with the composition of that activity rather than its aggregate level. Countries with similar GDP trajectories diverge dramatically in wellbeing efficiency depending on how economic activity is distributed across healthcare, education, social infrastructure, inequality, and care work.

These compositional effects are interaction effects in the formal sense: the wellbeing impact of healthcare spending depends on the level of social trust; the wellbeing impact of social trust depends on inequality; the wellbeing impact of inequality depends on the distribution of care infrastructure. GDP aggregation sums these components without preserving the interaction terms among them. The compression is lossy at exactly the level where the causal structure lives.
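
A deliberately simple numerical illustration, with a functional form and coefficients invented for exposition rather than estimated from any data, shows how summation hides exactly these terms.

```python
# Toy illustration of the interaction-effect point. The functional form and
# coefficients are invented for exposition, not estimated from any data.
def wellbeing(health, education, social_infra, trust):
    additive = 0.3 * health + 0.2 * education + 0.2 * social_infra
    interactions = 0.4 * health * trust + 0.5 * social_infra * trust
    return additive + interactions

# Two economies with identical aggregate expenditure (10 units) but different
# compositions; trust is assumed higher where social infrastructure is
# stronger, itself an interaction assumption.
economy_a = dict(health=5.0, education=4.0, social_infra=1.0, trust=0.2)
economy_b = dict(health=4.0, education=2.0, social_infra=4.0, trust=0.6)

for name, e in (("A", economy_a), ("B", economy_b)):
    aggregate = e["health"] + e["education"] + e["social_infra"]
    print(f"economy {name}: aggregate = {aggregate:.1f}, wellbeing = {wellbeing(**e):.2f}")
# Both aggregates are 10.0; the wellbeing outcomes differ, and the difference
# is carried mainly by the interaction terms, which summation cannot represent.
```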

Third, optimisation amplifies distortion. A measurement failure becomes a causal failure when institutions optimise toward the compressed representation. Once GDP becomes the target — once fiscal policy, electoral incentives, and institutional design align around maximising aggregate output — the systematic omissions of the SNA codebook become systematic drivers. Activities that generate GDP but destroy social capital (certain forms of financial speculation, predatory lending, attention-fragmenting technology) are rewarded. Activities that generate social capital but not GDP (care work, community building, ecological maintenance) are structurally disadvantaged. The distortion is not static; it compounds over time as optimisation for the codebook progressively hollows out the input distribution it was built to represent.

This is the formal mechanism behind the Easterlin paradox. GDP growth is not merely ineffective at producing wellbeing above a certain income level; in an environment where institutions have been optimised for GDP growth, it actively consumes the social and relational infrastructure that wellbeing depends on. The flatline in average life satisfaction across affluent societies is consistent with, and well explained by, the dynamics described here. This is not claimed as the uniquely sufficient explanation — hedonic adaptation and relative income effects also contribute — but the rate-distortion framework supplies the mechanism for why those patterns have the specific shape and duration they do.


6.2 The complexity economics mechanism

Farmer's complexity economics framework provides the most rigorous available account of why this distortion is not recoverable by better measurement alone.

Standard economic theory, in its DSGE formulation, models the economy as a system of rational representative agents converging toward equilibrium. In this framework, wellbeing can in principle be tracked at the level of individual utility: if each agent is maximising utility, aggregate welfare is the sum of individual utilities, and GDP is a reasonable proxy for the inputs to utility maximisation. The causal structure is additive.

Complexity economics demonstrates that this is the wrong model. The economy is a complex adaptive system in which macroeconomic outcomes — business cycles, inequality dynamics, technological diffusion, social capital formation — are emergent properties of the interactions among heterogeneous agents operating with bounded rationality in social networks. The key implication for the present argument is that emergent properties cannot be recovered from the sum of individual components. As Farmer's agent-based modelling demonstrates, business cycles, poverty traps, and spontaneous inequality emerge from the interaction structure of the economy rather than from the properties of any individual agent or even the aggregate of all agents. These phenomena live in the interaction terms, not the individual terms.
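
A deliberately minimal toy model, far simpler than the agent-based work cited above, makes the point concrete: identical agents and a symmetric exchange rule are enough for substantial inequality to emerge from interaction alone.

```python
# Toy random-exchange model (illustrative only; not a reproduction of any
# published agent-based model). Agents are identical and the exchange rule is
# symmetric, yet pronounced inequality emerges from the interaction structure.
import random

random.seed(0)
wealth = [100.0] * 1000                       # every agent starts identical

for _ in range(100_000):
    a, b = random.sample(range(len(wealth)), 2)
    pot = wealth[a] + wealth[b]               # pool the pair's wealth...
    split = random.random()                   # ...and re-divide it at random
    wealth[a], wealth[b] = split * pot, (1 - split) * pot

wealth.sort()
top_decile_share = sum(wealth[-100:]) / sum(wealth)
print(f"top 10% of agents hold {top_decile_share:.0%} of total wealth")
# No property of any individual agent predicts this outcome; it is a property
# of the exchange structure, which is exactly the information that summing
# individual endowments discards.
```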

The same logic applies to wellbeing. Wellbeing is not the sum of individual satisfactions; it is an emergent property of the interaction structure of economic, social, and ecological relationships. A community with strong social trust and rich local relationships will produce wellbeing outcomes that cannot be predicted from the per-capita income of its members, because the wellbeing-generating mechanism is the interaction structure rather than the individual endowments. GDP aggregation destroys this interaction information by summing without preserving the network structure that generates emergence. Optimising for GDP then degrades the network structure, because many of the activities that sustain it — care, community, ecological maintenance, unpaid relational work — are invisible to the codebook and therefore unprotected by the optimisation.

This provides the complexity economics answer to why spending composition matters more than aggregate spending, which is the institute's core empirical finding. It is not that health spending is categorically more valuable than other spending, or that social infrastructure spending has a higher multiplier. It is that certain combinations and compositions of spending maintain the interaction structure from which wellbeing emerges, and that the relationship between composition and wellbeing is fundamentally non-additive and context-dependent. No fixed spending rule will capture this, because the optimal composition is path-dependent: it depends on the existing social capital, the current relational infrastructure, and the interaction structure of the community in question.


6.3 The Wellbeing Observatory evidence

The institute's Global Wellbeing Observatory operationalises these claims empirically. The database tracks economic and wellbeing indicators across dozens of countries from 1970 to 2024, permitting analysis of the long-run relationship between economic composition and wellbeing outcomes at a scale and time horizon that most studies cannot achieve.

The headline finding — that wellbeing efficiency (wellbeing outcomes per unit of economic development) varies dramatically and systematically across countries with similar income levels — is directly predicted by the rate-distortion framework. If GDP aggregation destroys compositional and interaction information, then countries at similar income levels will differ substantially in wellbeing efficiency depending on how their economic activity is composed and structured. The variance in wellbeing efficiency at a given income level is not noise; it is the signal that the GDP compression was designed to discard.

The time-series structure of the data permits a further prediction: wellbeing efficiency should diverge from income growth particularly during periods of institutional restructuring that prioritise GDP maximisation over social infrastructure maintenance — the period of neoliberal reform in many OECD economies from the 1980s onward. This is a testable prediction that the existing dataset is positioned to address. It is not claimed here as an established result, but as a prediction that the complexity economics mechanism generates and that future analysis of the Observatory data should test.

More specifically, the rate-distortion framework predicts that the correlation between income growth and wellbeing should be highest in early development (where income growth expands the accessible range of basic goods, maintaining compression fidelity across the full wellbeing-relevant signal space) and lowest in late affluence (where the GDP codebook is furthest from the input distribution that wellbeing depends on). This is broadly consistent with the cross-sectional pattern in the data and with the historical pattern of the Easterlin finding. It is also consistent with a specific prediction about the composition of the most wellbeing-efficient economies: they should maintain higher levels of spending on care, social infrastructure, and ecological maintenance relative to GDP than less efficient economies at similar income levels — because these are the inputs to the interaction structure the codebook cannot see.


6.4 What counts as distortion at the economic level

The distortion measure at this level requires care. The claim is not that GDP is a bad measure of output — it is not; it measures what it was designed to measure with reasonable accuracy. The claim is that when GDP functions as the compression representation for wellbeing, it introduces systematic and compounding distortion.

The relevant distortion measure is: how much wellbeing-relevant information is destroyed in the compression from the full economic signal space to the GDP representation? This can be operationalised at three levels.

At the aggregate level: How much of the cross-national variance in wellbeing outcomes is accounted for by GDP, and how much is left unexplained? The Easterlin literature suggests the unexplained variance is large and systematic. The Observatory data permit a more precise quantification.

At the compositional level: How much of the variance in wellbeing efficiency across countries and time periods is accounted for by the composition of economic activity rather than its aggregate level? The preliminary finding that composition matters more than aggregate is a distortion measure: it quantifies how much information the aggregation step discards.

At the institutional level: How much do governance and institutional responses diverge from what the complexity economics model would predict is optimal, as a result of optimising for GDP rather than for the full input distribution? This is the hardest to measure but potentially the most important: it captures the compounding effect of optimising for the distorted codebook, which over time reshapes the economy in its own image, progressively suppressing the interaction structure it cannot represent — though the pace and reversibility of this suppression are empirical questions the Observatory data are positioned to begin addressing.


6.5 Evidential status and what would disconfirm this argument

The argument at this level rests on three tiers of evidence with different strengths.

The Easterlin finding — that GDP growth above a threshold does not produce commensurate wellbeing gains — is well-established and replicable. The complexity economics framework — that emergent properties cannot be recovered from aggregate optimisation in non-linear systems — is theoretically well-grounded and supported by agent-based modelling results. The institute's wellbeing efficiency finding requires more precise operationalisation than it has yet received in published form.

Wellbeing efficiency is constructed as the residual wellbeing outcome — measured through composite indicators including life satisfaction, healthy life expectancy, social trust, and income security — above or below what a country's GDP per capita level would predict from cross-national regression. A high-efficiency country is outperforming its income-predicted baseline; a low-efficiency country is underperforming it. The critical finding is that this efficiency measure varies dramatically and systematically, and that the variance is correlated with the composition of economic activity rather than with the aggregate level. The compositional variables currently tracked in the Observatory include social protection and healthcare expenditure as a share of GDP, inequality measures (Gini coefficient, income share ratios), and indicators of institutional trust and care infrastructure. The theoretical argument suggests that the ratio of social infrastructure, care, and ecological spending to aggregate consumption is the most relevant compositional cluster; making that cluster precise enough for systematic cross-national comparison at the 54-year scale is part of the empirical work currently in development. The compositional correlation as currently measured constitutes a preliminary confirmation of the interaction-effects mechanism; the more fully specified test awaits that operationalisation.
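
A sketch of the construction just described follows; the file and column names are placeholders standing in for the Observatory's actual variable definitions and the composite wellbeing index.

```python
# Sketch of the wellbeing-efficiency construction described above. The file
# and column names are placeholders, not the Observatory's actual schema.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("observatory_panel.csv")          # hypothetical country-year panel
panel["log_gdp_pc"] = np.log(panel["gdp_per_capita"])

# Step 1: the income-predicted baseline from a cross-national regression.
baseline = smf.ols("wellbeing_index ~ log_gdp_pc", data=panel).fit()

# Step 2: wellbeing efficiency is the residual above or below that baseline.
panel["wellbeing_efficiency"] = baseline.resid

# Step 3: the compositional claim is that this residual tracks how activity
# is composed, not how large it is.
composition = smf.ols(
    "wellbeing_efficiency ~ social_spending_share + gini + care_infrastructure",
    data=panel,
).fit()
print(composition.summary())
```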

Two classes of finding would substantially weaken the argument. The first is genuine external confounds: if the cross-national variance in wellbeing efficiency were primarily explained by culture or geography — variables external to the economic system being described — the mechanism claim would lose support even if the theoretical argument remained intact. The second is failed natural experiments: if economies that have structurally de-prioritised GDP optimisation in favour of broader wellbeing metrics (New Zealand's Wellbeing Budget, Iceland's wellbeing governance framework) showed no improvement in efficiency over time, the institutional implication would be undermined.

A third frequently cited alternative — governance quality — is not straightforwardly a confound in the same sense. Governance quality is, to a significant degree, endogenous to the economic codebook: institutions that measure success by GDP targets develop governance cultures shaped by those targets. Governance quality may therefore be part of the mechanism rather than a rival explanation for it. Separating governance quality as an independent variable from governance quality as a downstream consequence of codebook choice is technically demanding and represents one of the more important analytical tasks the full empirical programme requires.

The claim least hostage to empirical findings is the formal one: that scalar aggregation in non-additive systems can systematically obscure, and under optimisation can progressively suppress, the interaction structure relevant to emergent properties. This is not a universal theorem — it depends on properties of the specific system being aggregated — but it is a consequence of the mathematics of non-linear systems that is well established in complexity science and that the agent-based economic results reviewed in 6.2 instantiate concretely. The empirical question is whether the economy exhibits the non-additive interaction structure the argument requires — and the complexity economics literature provides strong grounds for an affirmative answer.


6.6 Cross-level connections

The economic level is upstream of the other four in one specific and important direction: the incentive architecture of the attention economy is not a cultural accident or an autonomous development in technology. It is a predictable product of a market-institutional order in which engagement is monetisable and neurobiological or social damage is not — an order that uses GDP and its constituent accounting categories as a primary legitimating measure of success.

To be precise about the layering here: the SNA does not itself produce the attention economy. The SNA is a representational framework designed for fiscal and monetary management; it has no independent causal power. What produces the attention economy is a broader market-institutional system in which: (1) advertising revenue accrues in national accounts as a positive contribution to GDP; (2) platform valuation is treated as wealth creation; (3) transaction volume from engagement-driven commerce registers as output — while the neurochemical costs of sustained variable-ratio stimulation, the cognitive costs of attentional fragmentation, and the social costs of shared-world degradation appear nowhere in the accounts. The SNA codebook, through its institutional use as a general-purpose optimisation target by fiscal authorities, capital markets, and regulatory frameworks, legitimises and reproduces this asymmetry. The issue is not the codebook's existence but its promotion from descriptive tool to optimisation criterion across institutional domains it was never designed to govern.

The implication for the other levels is real, if indirect. When attentional engagement is economically rewarded and neurobiological range contraction is economically invisible, market selection systematically favours the design choices that produce neurochemical monoculture. When social capital formation is unpriced and social capital destruction is unremarked, the investment calculus systematically underweights relational infrastructure. When shared-world maintenance has no GDP coefficient and shared-world fragmentation generates advertising revenue, the institutional incentives reliably favour fragmentation. These connections are each specifiable and each testable; they are not a single undifferentiated claim that "GDP causes everything."

The upward connection to Level 5 (governance) is more direct. Governance institutions evaluated by GDP metrics are doing to policy what the SNA does to economic reality: compressing a complex adaptive system into a narrow metric set and then optimising for the metric set. The brittleness of governance response to novel challenges — pandemic mismanagement, climate policy delay, AI safety failures — follows from the same logic in each case. The metric set does not represent the full input distribution of the governed system. Signals outside the metric set accumulate until they produce consequences that enter the metric set. The institution responds to consequences rather than to signals, because the signals were never in the codebook. Section 7 develops this argument.




Institutional Monoculture and the Governance Compression Failure

[Fifth and final level-specific section, placed seventh in the paper's running order.]

Overview

The governance level is where the monoculture problem becomes most immediately consequential for collective action. When nervous systems, sleeping brains, language communities, and economies are all exhibiting the compression failures described in prior sections, the institutional systems responsible for detecting and responding to these failures are the last backstop. What this section shows is that those institutions are themselves running on monoculture — and that their compression failure is structurally constitutive of all the others.

Governance monoculture is the compression of the complex adaptive systems that institutions govern into a narrow metric set, followed by optimisation toward that metric set. The damage this produces has the same formal structure as at every other level: the metric set does not represent the full input distribution; the institution optimises for the metric set; signals outside the metric set accumulate; the institution responds to consequences rather than signals. But at the governance level, the stakes are asymmetric — institutional compression failures do not only harm the institutions themselves, they prevent the detection and remediation of compression failures at all other levels.


The template applied

Element | This level
What is compressed | The full signal space of a governed complex adaptive system → a metric set of actionable indicators (GDP, crime rates, test scores, unemployment, engagement metrics)
Input distribution | All signals relevant to the adaptive state of the governed system: including informal, relational, cultural, ecological, slow-moving, and hard-to-quantify signals alongside the measured ones
Codebook | The institutional model of what counts as a signal worth acting on: the implicit representational framework embedded in measurement choices, reporting requirements, and incentive architectures
Distortion measure | Loss of adaptive capacity: the degree to which the institutional response fails to detect, interpret, or act on signals that matter but are outside the metric set

7.1 Static governance and the DSGE analogy

Standard economic policy operates within a framework that Farmer's complexity economics directly challenges: Dynamic Stochastic General Equilibrium modelling assumes rational representative agents converging toward a predictable equilibrium. Perturb the system with an exogenous shock; calculate the return path; implement the optimal policy. This works when the environment is stable, the shocks are of familiar types, and the relevant causal structure is captured in the model.

Governance institutions outside economics have adopted structurally equivalent assumptions. The prediction-based policy paradigm identifies equilibrium social states, models the effect of interventions, and attempts to steer the system toward the predicted optimal. Public health policy models disease transmission and optimises vaccination rates. Educational policy models achievement drivers and optimises test outcomes. Urban policy models traffic flow and optimises road throughput. The paradigm is coherent, productive for well-understood problems, and catastrophically insufficient for genuine novelty.

The compression failure here is precise: the institutional codebook was built on the basis of past signals and past causal relationships. It can detect and respond to perturbations within the class of signals its model anticipates. It cannot detect signals that are structurally outside its model, because those signals do not appear in the metric set, and the metric set is what the institution watches. When novel signals accumulate — the early epidemiological patterns of a novel pathogen, the slow-moving precursors of a financial crisis, the gradual erosion of social cohesion that precedes political fracture — they are invisible to the governance system until they produce consequences that enter the metric set by force.

This is not a failure of intelligence, goodwill, or resource. It is a structural consequence of running a complex adaptive system through a codebook calibrated for a different environment. The institution has been optimised for the codebook; the codebook does not represent the full input distribution; novel signals are outside the codebook; the institution responds to consequences rather than signals.
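
The structural point can be restated as a minimal simulation; every quantity below is illustrative and nothing is calibrated to any real institution.

```python
# Minimal simulation of the structural failure mode: an institution that
# watches only its metric set responds late to a signal accumulating outside
# it. All quantities are illustrative.
import random

random.seed(1)

tracked_metric = 100.0      # what the institution watches (e.g. measured output)
untracked_signal = 0.0      # e.g. eroding cohesion: invisible to the codebook
ALERT_THRESHOLD = 95.0      # the institution acts only when the tracked metric falls

for month in range(1, 121):
    untracked_signal += random.uniform(0.5, 1.5)       # slow, steady accumulation

    # The untracked signal touches the tracked metric only once it is large
    # enough to force consequences into the metric set.
    if untracked_signal > 30:
        tracked_metric -= 0.08 * (untracked_signal - 30)

    if tracked_metric < ALERT_THRESHOLD:
        print(f"month {month}: institution finally responds "
              f"(untracked signal already at {untracked_signal:.0f})")
        break
else:
    print("no response within the horizon")
```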


7.2 The AI safety case as governance monoculture

The institute's prior work on AI safety (Kristensen and Ricketts, submitted) provides an unusually clean instance of governance monoculture in a contemporary and rapidly evolving domain.

The dominant AI safety paradigm focuses on technical alignment: ensuring that model behaviour conforms to intended goals and constraints at the level of training. This work is necessary, but the paper's analysis of deployed AI systems finds a different pattern: many near-term AI harms are produced not by model failures in the alignment sense but by deployment environments in which engagement-compatible metrics — fluency, retention, perceived helpfulness, seamless task completion — systematically underreward safer behaviour. Provenance markers, uncertainty signalling, staged autonomy, and high-risk refusals are all technically feasible; all are weakly deployed; all are weakly deployed because they impose recognisable costs on the metrics the organisations are optimising.

In the governance monoculture framework: the institutional codebook (the metric set of growth, retention, satisfaction, and perceived capability) does not represent the full signal space relevant to AI safety (which includes calibration, epistemic humility, appropriate refusal, and the long-run costs of over-reliance and misinformation). The institution optimises for its codebook. Signals outside the codebook — the harm accumulation patterns that do not register as product failures under engagement metrics — are invisible until they produce consequences that enter the metric set through regulatory pressure, reputational crisis, or legal liability.

The AI safety case generalises directly to pandemic response, climate governance, and financial regulation. In each case: the institutional metric set was designed for a different environment; novel signals accumulated outside the metric set; the institution responded to consequences rather than signals; the damage was in each case substantially greater than it would have been had the institution been watching the right signals.


7.3 Anti-fragility as the design response

The institute's published work on governance (Ricketts, 2025; Anti-Fragile Well-Being, DOI: 10.65638/2978-882X.2025.01.05) proposes anti-fragility as the design principle for institutions operating under genuine uncertainty. An anti-fragile system is not merely resilient — it does not simply return to its prior state after disruption. It improves its adaptive capacity through exposure to variability, provided the variability is within ranges that can be survived and learned from.

Anti-fragility as governance design means: building institutions that maintain signal diversity rather than optimising it away; that reward the detection of weak signals outside the current metric set rather than penalising the attention cost they impose; that treat adaptive surprise not as a failure of prediction but as information; and that maintain redundancy and variety in their response repertoire rather than standardising on the most efficient tool for the most anticipated problem.

The Virtual Living Lab (VLL) concept, developed in the institute's adaptive governance work, operationalises this at the policy testing level: a simulation and pilot environment that allows governance interventions to be tested across the full signal distribution of a governed system — including the signals the formal metric set does not capture — before deployment at scale. The VLL is the governance equivalent of the sleep distillation operator's full experience trace: it maintains signal diversity in the testing environment so that the compressed policy response that emerges from it is not calibrated exclusively for the signals already in the institutional codebook.


7.4 The institutional monoculture spiral

There is a compounding dynamic at the governance level that does not appear with the same force at other levels. At Level 1, neurochemical monoculture does not make the nervous system more committed to monoculture; it simply narrows the range. At the governance level, institutional monoculture actively reproduces itself: the metric set determines what counts as success, what counts as success determines what activities are funded and evaluated, what activities are funded determines what expertise is developed, and what expertise is developed determines what signals the institution is even capable of detecting. Each cycle of optimisation makes it harder for the institution to see outside its codebook.

This spiral explains why governance monoculture is particularly resistant to incremental reform. An institution optimised for GDP cannot begin tracking wellbeing efficiency without challenging the legitimacy of its own performance metrics. An AI company optimised for engagement cannot deploy uncertainty signalling without making the performance costs visible in precisely the metrics its investors are watching. A public health system optimised for treatment metrics cannot invest in prevention and social determinants without generating apparent budget inefficiency in the measurement framework it is accountable to.

The governance level is therefore where the monoculture problem has the highest institutional stakes — not because governance compression failure is intrinsically worse than neurochemical or linguistic failure, but because the governance system is the one responsible for detecting and responding to all the others. Its compression failure does not make correction impossible, but it substantially raises the threshold at which correction becomes politically and institutionally tractable: the signals that would prompt reform are precisely the ones outside the metric set that the institution is not watching.


7.5 Evidential status and cross-level connections

The empirical evidence at this level is strongest for specific institutional case studies (AI safety deployment patterns, pandemic response failures, financial regulation gaps) and weakest for the general formal claim that governance institutions function as compression systems in the rate-distortion sense. That general claim rests on the formal argument developed in the theory section, instantiated through the case evidence, rather than on direct measurement.

The strongest cross-level connection is to Level 4: governance institutions that measure success by GDP inherit the economic codebook's compression failures and amplify them through policy. The second strongest is upward to the paper's synthesis: governance monoculture is the systemic condition that makes the other four levels' monocultures self-reinforcing rather than correctable. An institution watching the right signals could, in principle, detect neurochemical monoculture, sleep disruption, shared-world fragmentation, and the divergence of wellbeing from economic output. The governance monoculture ensures it is not watching those signals — until the consequences are unavoidable.



The Cross-Level Synthesis

8.1 Instances, not analogies: the causal architecture

The preceding five sections have demonstrated the monoculture problem at each level independently. This section makes the synthesis claim: that the five levels are not analogous instances of a shared pattern but are causally coupled instances of the same class of failure — and that understanding the coupling is necessary both for diagnosis and for intervention design.

The causal architecture proposed here involves three principal connections. Each is supported to a different evidential standard, and each generates specific, testable predictions distinct from the level-specific predictions already developed.

Connection 1 — Economic-institutional to attention environment (Levels 4+5 → Levels 1, 2, 3). The attention economy — the commercial architecture of variable-ratio engagement, platform-mediated communication, and algorithmically personalised content — is a product of a market-institutional order that prices engagement and does not price neurobiological or social cost. This connection is the most empirically direct: the mechanisms (advertising revenue accrues in national accounts, platform valuation is treated as wealth creation, neurobiological cost does not appear in accounts) are specifiable, and their consequence for incentive design is traceable through standard institutional analysis. It does not require claiming that GDP causes neurochemical monoculture; it requires only that the institutional order that GDP legitimises rewards the design choices that produce neurochemical monoculture, and that this creates a systematic selection pressure rather than an incidental one.

Connection 2 — Lower levels feed back upward (Levels 1, 2, 3 → Levels 4, 5). Neurochemical monoculture and shared-world fragmentation plausibly reduce the deliberative and communicative capacity required for collective institutional reform. A population with chronically narrowed experiential range, degraded sleep consolidation, and thinning common ground has reduced capacity for the kind of sustained, cross-difference collective deliberation that meaningful policy change requires. This connection is more inferential: it is grounded in the functional descriptions at each level rather than in direct measurement of the proposed pathway, and it should be read as a framework prediction rather than an established finding. The specific prediction it generates — that populations in higher-monoculture environments show reduced capacity on measures of deliberative quality and cross-group communicative efficiency — is testable, and that test has not yet been conducted.

Connection 3 — Institutional monoculture reproduces itself (Level 5 internal). The most internally documented coupling is the spiral within the governance level: the metric set determines what counts as success, which shapes what expertise is funded, which in turn determines what signals the institution can detect. This is not a cross-level coupling but an intra-level feedback for which the governance literature on measurement fixation and institutional path-dependence provides empirical support. It is included in the synthesis because it explains why the governance monoculture is particularly resistant to incremental reform — reform must challenge the legitimacy of the metrics by which reform itself would be evaluated.

The three connections together produce a dynamic in which several of the level-specific monocultures tend to reinforce each other through the mechanisms specified above. This is a weaker claim than "everything causes everything" — the connections are specific, differentially evidenced, and generate distinct predictions. It is also a stronger claim than level-specific analysis alone supports: the levels are not independent, and interventions that ignore their coupling are likely to be offset by the coupling dynamics they do not address.

8.2 The common mechanism stated precisely

The rate-distortion framework now permits the cross-level claim to be stated with precision rather than as a structural metaphor.

At every level, a compression mechanism (the neuromodulatory system, the sleep distillation operator, the language community's shared codebook, national accounting, institutional metric sets) operates on an input distribution (the experiential signal environment, the waking experience trace, the space of communicable meanings, the full space of social and economic activity, the full signal space of governed systems). Late modernity's dominant optimising logic — engagement maximisation, GDP growth, prediction-based equilibrium policy — systematically narrows the effective input distribution that each compression mechanism receives, while degrading the shared representational frameworks (codebooks) that compression efficiency depends on.
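For readers who prefer the notation, the claim can be stated in the standard rate-distortion form; the identification of terms with each level follows the prose above, and the formula itself is the classical one rather than a new result.

```latex
% Schematic restatement of the common mechanism in standard rate-distortion notation.
\[
  R(D) \;=\; \min_{\,p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D\,} I(X;\hat{X})
\]
% X ~ p_env is the input distribution each mechanism compresses (experience trace,
% communicable meanings, economic and social activity, governed-system signals);
% \hat{X} ranges over the mechanism's codebook; d is the distortion the system can
% tolerate; I is mutual information. On this picture monoculture acts twice: it
% narrows the support of p_env, so the codebook is re-optimised for a smaller input
% space, and it degrades the shared codebook itself, so that holding distortion at D
% for signals outside that narrowed support requires rate the system no longer spends.
```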

The damage profile is structurally similar across levels — not immediate collapse but progressive brittleness: reduced range of response, reduced novelty tolerance, and gradual atrophy of adaptive capacity for signals outside the narrowed input distribution. The characteristic failure mode varies in its specific manifestation at each level (structural anhedonia at Level 1, shallow memory consolidation at Level 2, communicative degradation at Level 3, the Easterlin divergence at Level 4, governance brittleness at Level 5) but shares a common signature: the system continues to perform within its impoverished input range while losing the capacity to respond to signals outside it, until those signals produce consequences that cannot be ignored.

The solution type is therefore also consistent across levels: restore signal diversity to the compression mechanism that requires it. At the neurochemical level, this means restoring the ecological and temporal conditions for the neglected architecture's activation. At the cognitive level, it means maintaining the experiential variety that sleep compression requires. At the linguistic level, it means investing in the shared worlds and common ground that language compression depends on. At the economic level, it means redesigning the codebook to preserve interaction information rather than destroy it. At the governance level, it means building institutions that reward signal diversity rather than penalising it.

8.3 Why level-specific interventions are insufficient when considered alone

The cross-level coupling has an important implication for intervention: addressing the problem at any single level is likely to be insufficient while the other levels continue to generate it, because the coupling dynamics are strong enough to offset level-specific remediation.

A public health intervention that restores neurochemical range — reduced screen time, nature exposure, protection of unstructured time — is working against an economic and institutional architecture that systematically generates the conditions it is trying to remedy. The intervention must be repeated indefinitely to offset a structural force it does not address. An economic measurement reform that incorporates wellbeing efficiency is working against the governance institutions calibrated to the codebook it is trying to replace, and against the political economy of organisations whose legitimacy depends on GDP performance. A governance reform that builds signal diversity into policy evaluation appears as friction — delay, additional cost, reduced measurable output — within the metric frameworks by which governance reform is evaluated.

None of this makes level-specific intervention futile. The empirical research programme outlined in Section 9 is worth pursuing at each level on its own terms. But the synthesis argument implies that the largest leverage points are likely to be at the level of the coupling mechanisms themselves — particularly the institutional incentive architecture that simultaneously selects for narrow codebooks across multiple levels, and the design choices in the attention environment that simultaneously drive neurochemical monoculture, sleep input impoverishment, and shared-world fragmentation through a common commercial logic.



Implications for Design, Policy, and Education

9.1 The design principle: appropriate range

The paper's argument implies a design principle that is simpler to state than to implement: design for appropriate range. Not maximal complexity — monoculture's opposite is not chaos — but the restoration of signal diversity to the degree that each compression mechanism requires to function well.

This principle is actionable at each level with different specificity.

At the neurochemical level, appropriate range means environmental design that restores the three convergent activation conditions — slowness, physical embodiment, sustained attention — to the experiential baseline. This is less a clinical intervention than an architectural and temporal one: the design of built environments that encourage physical engagement, the protection of unstructured time in institutional schedules, the design of digital interfaces that support sustained attention rather than fragmenting it, and the cultural normalisation of the states — grief, awe, boredom, meditative absorption — that late modernity's stimulation architecture treats as deficits.

At the cognitive level, appropriate range means educational and workplace design that maintains experiential variety as an explicit input-quality requirement for cognitive health. This is the sleep medicine implication of the graph distillation model: intervention on waking experience quality, not only on sleep quantity and hygiene, is necessary for the distillation process to perform well.

At the linguistic level, appropriate range means investment in shared worlds as cognitive infrastructure. This includes educational investment in common cultural knowledge, deliberate design of public information environments that maintain shared reference rather than fragmenting it, and the institutional prioritisation of communicative contexts that build common ground across difference — not because diversity is intrinsically valuable but because shared world is the compression machinery that communication depends on.

At the economic level, appropriate range means codebook reform: the redesign of national accounting to preserve compositional and interaction information. The Wellbeing Observatory provides the empirical infrastructure for this, and the analysis of wellbeing efficiency over the 54-year dataset can serve as the evidentiary basis for specific accounting reforms. The principle is not to add more metrics but to preserve the interaction structure that aggregate metrics destroy.
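A toy numerical illustration, with entirely hypothetical figures and an assumed functional form, shows what preserving interaction information means in practice: two regions with identical aggregate output are indistinguishable to the scalar codebook, while a measure that retains composition and an interaction term between market and relational activity separates them.

```python
# Toy illustration only: the figures and the wellbeing functional form are assumptions
# chosen to show the structural point, not estimates from the Observatory dataset.
regions = {
    "A": {"market_output": 80.0, "care_and_relational": 20.0},
    "B": {"market_output": 55.0, "care_and_relational": 45.0},
}


def scalar_aggregate(r):
    # The GDP-style codebook: composition is summed away into a single number.
    return r["market_output"] + r["care_and_relational"]


def toy_wellbeing(r, interaction_weight=0.01):
    # Assumed form: wellbeing rises with both components and with their interaction,
    # so neither component fully substitutes for the other.
    return (0.5 * r["market_output"]
            + 0.5 * r["care_and_relational"]
            + interaction_weight * r["market_output"] * r["care_and_relational"])


for name, r in regions.items():
    print(name, "aggregate:", scalar_aggregate(r),
          "toy wellbeing:", round(toy_wellbeing(r), 1))
# Both aggregates are 100.0; the toy wellbeing scores differ (66.0 vs 74.8) only
# because composition, and therefore the interaction term, has been preserved.
```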

At the governance level, appropriate range means anti-fragile institutional design: institutions that maintain diversity in their signal set, that reward the detection of weak signals outside the current metric set, that treat adaptive surprise as information, and that build testing environments (such as the VLL) that preserve the full input distribution rather than optimising it away before policy is deployed.

9.2 The liberal arts implication

There is an institutional implication of this argument that is specific to the setting in which it was developed. The AI+Wellbeing Institute is based at a liberal arts university, and the argument of this paper is not merely consistent with the liberal arts model — it provides a theoretical grounding for why that model is epistemically necessary for addressing the monoculture problem.

The monoculture problem cannot be diagnosed from within any single discipline. It requires holding neuroscience, information theory, linguistics, economics, and institutional design simultaneously — not as a rhetorical gesture toward interdisciplinarity, but as a functional requirement for seeing the cross-level mechanism. A neuroscientist cannot see the linguistic monoculture that amplifies neurochemical range contraction. An economist cannot see the sleep compression failure that degrades the cognitive architecture that institutional reform requires. A governance theorist cannot see the neurobiological consequences of the metric set their institution has chosen.

What liberal arts training produces — the capacity to hold multiple disciplines in productive tension, to trace mechanisms across levels, to resist the specialisation pressure that makes cross-level diagnosis impossible — is not merely culturally enriching. It is the epistemic infrastructure required to see and address the monoculture problem at all. A curriculum that builds integrative capacity across neuroscience, economics, and linguistics is producing graduates who can do something that no specialist can do: diagnose compression failures at the level of the whole system.

This is the AI+Wellbeing Institute's institutional argument, grounded in the theory rather than asserted as a value claim.

9.3 The AI and technology design implication

The argument has specific implications for AI and technology design that extend beyond the safety paper's institutional framing.

First: AI systems trained and deployed for engagement maximisation are not neutral tools that happen to be misused. They are compression systems designed with a codebook optimised for the signals that the economic-institutional complex rewards. Their deployment at scale is a driver of shared-world fragmentation (Level 3), neurochemical monoculture (Level 1), and the institutional monoculture that prevents governance response (Level 5). The design of AI systems that maintain rather than erode the compression infrastructure they operate within is not an optional ethical consideration; it is a functional requirement for systems that are supposed to support rather than degrade human adaptive capacity.

Second: the LLM architecture — compression of statistical language surface without access to shared world — is not merely a current limitation awaiting resolution by scale. It is a structural prediction of the rate-distortion framework that language without shared world will produce a characteristic distortion profile: local coherence, global shallowness, confident assertion without epistemic warrant. This profile will not be resolved by more parameters. It requires architectural decisions about how shared world is represented and maintained in the system — which means decisions about the relationship between model training and genuine social grounding.

9.4 The research programme

The paper's empirical gaps are specific and tractable. In priority order:

  1. Naturalistic measurement of kappa-opioid receptor occupancy and hedonic baseline as a function of engagement-architecture exposure (Level 1).
  2. Experimental test of input-quality dependency in sleep consolidation: does experiential variety during waking hours improve post-sleep graph utility on explicitly structured learning tasks? (Level 2).
  3. Population-level test of shared-world fragmentation using compression-efficiency measures: do algorithmically sorted pairs show lower compression convergence in collaborative tasks than non-sorted pairs? One candidate operationalisation is sketched after this list (Level 3).
  4. Full specification and cross-national testing of the wellbeing efficiency compositional correlation, using the Observatory dataset and a pre-registered interaction-effects model (Level 4).
  5. Longitudinal analysis of wellbeing efficiency in economies that have institutionalised alternatives to GDP optimisation (Level 4).
  6. Development and piloting of the Virtual Living Lab as a governance signal-diversity testing environment, with pre-registered comparison of policy outcomes against standard prediction-based deployment (Level 5).
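For item 3, one candidate operationalisation of compression convergence, offered as an illustrative sketch rather than the study's pre-registered instrument, is the normalised compression distance (NCD) between partners' utterances, computed with an off-the-shelf compressor; a fall in the distance across task rounds would indicate a converging shared codebook.

```python
# Illustrative sketch: NCD between partners' utterances as a proxy for shared-codebook
# convergence. The transcripts below are invented; the measure, not the data, is the point.
import zlib


def compressed_len(text: str) -> int:
    """Compressed length in bytes, used as a proxy for description length."""
    return len(zlib.compress(text.encode("utf-8"), 9))


def ncd(a: str, b: str) -> float:
    """Normalised compression distance: lower values indicate more shared structure."""
    ca, cb, cab = compressed_len(a), compressed_len(b), compressed_len(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)


# Hypothetical transcripts from early and late rounds of a collaborative reference task.
early_a = "the weird angular one with the bit sticking out on the left side"
early_b = "do you mean the shape that looks like a broken chair facing sideways"
late_a = "the broken chair one"
late_b = "yes the broken chair"

print("early-round NCD:", round(ncd(early_a, early_b), 3))
print("late-round NCD: ", round(ncd(late_a, late_b), 3))
# The framework's prediction in item 3 is that algorithmically sorted pairs show a
# slower or smaller fall in this distance across rounds than non-sorted pairs.
```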


Conclusion

The 1970 corn blight did not require a new pathogen. It required a monoculture — a system so optimised for a single high-yield variant that one fungus could consume the whole of it. The lesson generalises from agriculture to any system that discards redundancy in favour of efficiency: what is pruned away to maximise throughput on the chosen metric is often exactly what the system needed to respond to things the metric did not anticipate.

This paper has argued that late modernity is running that experiment simultaneously at five levels of human organisation, through a structurally similar mechanism at each level: the narrowing of signal diversity in compression systems whose adaptive function depends on that diversity. The neuromodulatory architecture is being activated across a shrinking portion of its functional range. The sleeping brain is consolidating impoverished input and producing shallower, less modular memory. Language communities are losing the shared worlds through which communication efficiently carries meaning. Economies are being represented by a scalar that discards the interaction structure where wellbeing actually lives. Governance institutions are watching a narrow metric set while signals outside it accumulate into the crises they then manage too late.

The formal unification that rate-distortion theory provides is not merely elegant. It is productive. It permits the claim that the five levels are instances of the same problem rather than analogies of each other. It specifies what would confirm and disconfirm the argument at each level. It identifies the common solution type across all five — restore signal diversity to the compression mechanism that requires it — and it makes clear why level-specific interventions are insufficient without addressing the common institutional driver.

The paper does not claim to have solved the monoculture problem. It claims to have identified it with enough precision that the next steps — empirical, institutional, and in design — can be specified and pursued. The work ahead is substantial. But it is now, at least, the right work.


Summary for a general audience

Human beings are complex systems. Our brains, our relationships, our economies, and our institutions all work by compressing a complicated world into manageable representations — and they do this well only when the world they are compressing is genuinely varied and rich.

Late modernity is systematically removing that variety. The same economic and technological logic that optimises for engagement, growth, and measurable output is simultaneously narrowing the signal environments that human systems depend on. Our nervous systems are being run on three chemical channels in an architecture built for a hundred. Our sleeping brains are compressing a narrower and narrower range of experience and producing shallower memories. Our language communities are losing the shared worlds through which meaning is efficiently transmitted. Our economies are being measured by a number that destroys the information about relationships and care that actually drives flourishing. Our governance institutions are watching narrow metric sets while the signals that matter accumulate outside the frame.

This paper calls this the monoculture problem — borrowing from the 1970 corn blight, in which decades of optimisation for a single high-yield variant left the entire US corn crop vulnerable to a single pathogen. Monocultures are efficient until they encounter what they pruned away.

The paper shows that these five problems — neurochemical, cognitive, linguistic, economic, and institutional — are not separate trends. They are instances of the same failure, connected by the same mechanism, and requiring the same solution type: restore the signal diversity that complex systems need to remain adaptive, resilient, and genuinely capable of supporting human life.

Data availability

The Global Wellbeing Observatory dataset (1970–2024) is available at ai-well-being.com. Source working papers for all five levels are accessible at the same address. No new empirical data were generated for this synthesis.

Acknowledgements

The author thanks collaborators at the University of Tokyo and University of Melbourne for contributions to the adaptive governance and wellbeing economics work underpinning Sections 6 and 7. The sleep distillation framework was developed in collaboration with S. Jhingan. The AI safety institutional analysis was developed in collaboration with E. Kristensen.

Author contributions

J.R. conceived the synthesis framework, developed the rate-distortion account across all five levels, wrote the paper, and directed the AI+Wellbeing Institute research programme from which the empirical work derives.

Competing interests

The author declares no competing interests.

References

  1. Shannon CE (1948). A mathematical theory of communication. Bell System Technical Journal 27:379–423.
  2. Berger T (1971). Rate Distortion Theory. Prentice-Hall, Englewood Cliffs NJ.
  3. Farmer JD (2024). Making Sense of Chaos: A Better Economics for a Better World. Allen Lane, London.
  4. Easterlin RA (1974). Does economic growth improve the human lot? In: David PA, Reder MW (eds) Nations and Households in Economic Growth. Academic Press, New York, pp 89–125.
  5. Stiglitz JE, Sen A, Fitoussi JP (2009). Report by the Commission on the Measurement of Economic Performance and Social Progress. Paris.
  6. Feld GB, Bernard C, Rawson NE, Spiers HJ (2022). Sleep preferentially consolidates memory for highly connected nodes in an explicitly learned graph. Current Biology 32:R476–R477.
  7. Clark HH, Wilkes-Gibbs D (1986). Referring as a collaborative process. Cognition 22:1–39.
  8. Taleb NN (2012). Antifragile: Things That Gain from Disorder. Random House, New York.
  9. Meadows D (2008). Thinking in Systems. Chelsea Green Publishing, White River Junction VT.
  10. Ricketts J, Jhingan S (2026). Sleep as graph distillation: a formal framework for memory consolidation as resource-constrained representational optimisation. Working paper, AI+Wellbeing Institute.
  11. Ricketts J (2026a). We speak through shared worlds: a rate–distortion view of human language. Working paper, AI+Wellbeing Institute. Available at shared-worlds.netlify.app.
  12. Ricketts J (2026b). More human than human: LLM creativity via local language structure. Working paper, AI+Wellbeing Institute. Available at more-human-than-human.netlify.app.
  13. Ricketts J (2025). Anti-fragile well-being: a cultural systems framework for adaptive public policy. Wellbeing Futures 1. DOI: 10.65638/2978-882X.2025.01.05.
  14. Kristensen E, Ricketts J (submitted). Safety beyond the model: interaction design, platform incentives, and institutional failure in deployed AI. Working paper, AI+Wellbeing Institute.
  15. Coupé C, Oh YM, Dediu D, Pellegrino F (2019). Different languages, similar encoding efficiency. Science Advances 5:eaaw2594.
  16. Kuznets S (1934). National Income, 1929–1932. Senate Document No. 124. US Government Printing Office, Washington DC.
  17. Ricketts J (2026c). Neurochemical monoculture and the contraction of human range. Working paper, AI+Wellbeing Institute. Available at nmc1.netlify.app.