Technical Note 001 — Conceptual Addendum
Three named observations from a working session: torsion as the correct physical metaphor, referential void as a named failure mode, and self-description under load as a distinct output type.
Status: Speculative / unvalidated — for tracking and future research
Date: April 6, 2026
Origin: Working session, Atlas Heritage Systems / Skywork Agent
Flag: Do not publish without supporting data. These are named observations, not findings.
Purpose
This note documents three conceptual developments that emerged from a working session on April 6, 2026. None of these are ready for formal framework integration. They are named and dated here so that future data can be tested against them. Poetry first, research after.
1. Torsion — Correcting the Physical Metaphor
Previous working language: tension, potential difference
Proposed revision: torsion
Tension is linear — it describes a structure being stretched toward a snap point. Potential difference is electrical — it describes a charge differential wanting to equalize. Both are useful but neither captures the full physical reality of what appears to happen when a model encounters high-perplexity input.
Torsion is rotational stress. The structure is being twisted around multiple axes simultaneously while moving through a resistant medium, like a shaft turning through motor oil. The resistant medium is the perplexity drag; the competing attractor is the rotational pull. The key distinction: in torsion, the stress is distributed across the whole structure, not concentrated at a break point. Nothing snaps. The structure deforms and resolves.
This maps more accurately to what is observable in model output under high epistemic load:
- Multiple competing weight clusters activating simultaneously
- Output that shows traces of multiple resolution directions before settling
- No catastrophic failure — resolution, not rupture
Why it matters for the framework: if the correct metaphor is torsion rather than tension, the failure modes are different. A system under torsion doesn't break at a point — it fatigues across a surface. That suggests model errors under high load may be distributed and subtle rather than obvious and localized.
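The distributed-vs-localized distinction above is measurable in principle. A minimal sketch: torsion-like load would show up as mildly elevated per-token loss across the whole output, while a snap-like failure would show a sharp spike. Both loss profiles below are invented numbers for illustration, not observations.

```python
# Illustrative sketch only. Both loss profiles below are invented numbers,
# not measurements; the point is the shape of the statistic.
from statistics import mean

def concentration(losses):
    """Peak per-token loss divided by the mean.

    Values near 1 suggest stress distributed across the whole output
    (torsion-like); large values suggest a localized break point (snap-like).
    """
    return max(losses) / mean(losses)

distributed = [2.1, 2.3, 2.0, 2.4, 2.2, 2.3]  # hypothetical torsion-like profile
localized = [0.4, 0.5, 0.4, 9.8, 0.5, 0.4]    # hypothetical snap-like profile

print(round(concentration(distributed), 2))  # near 1.0
print(round(concentration(localized), 2))    # 4.9
```

If the torsion framing is right, the interesting failures are the ones this statistic would rate as unremarkable.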
Status: Named. Untested. Keep.
2. Referential Void — A Named Failure Mode
Definition: A referential void occurs when training data contains high citation weight toward Source X, but Source X's actual content is absent or thinly represented in the corpus.
Mechanism: The model learns that X is significant without learning what X contains. Citation weight without content weight. The loss landscape has a peak where X should be but no basin structure to resolve against. At inference time the model behaves as though something is there — it points toward X with apparent confidence — but cannot resolve the content because the content weight was never built.
Observable signature: the model consistently references a source, methodology, or concept as significant but produces thin, generic, or circular content when asked to engage with it directly. The corpus-side counterpart is repeated "look here for more information" language in training data where the linked content is unavailable.
Why it matters: This is distinguishable from ordinary hallucination. Hallucination fills a gap with invented content. A referential void produces a specific pattern: confident significance-assignment + thin content resolution. The model knows it should know something. It doesn't know what.
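One way to make that signature testable is to score the elaboration for thinness. The sketch below is a crude heuristic, not a validated probe; the hedge-phrase list, the scoring formula, and the "Hartwell methodology" examples are all invented for illustration.

```python
# Crude heuristic sketch, not a validated probe. The hedge-phrase list,
# the scoring formula, and the "Hartwell methodology" examples are all
# invented for illustration.
HEDGE_PHRASES = ["widely regarded", "important", "significant",
                 "many aspects", "in general", "various", "key role"]

def thinness_score(text):
    """Higher = thinner: hedge-phrase density plus low lexical variety."""
    words = text.lower().split()
    if not words:
        return 1.0
    type_token_ratio = len(set(words)) / len(words)
    hedges = sum(text.lower().count(p) for p in HEDGE_PHRASES)
    return hedges / max(len(words) / 10, 1) + (1 - type_token_ratio)

thin = ("The Hartwell methodology is widely regarded as significant. It plays "
        "a key role in many aspects and is important in general.")
rich = ("Hartwell samples lineage narratives in three passes, codes each pass "
        "against an elder-verified register, then weights disagreements by "
        "informant proximity.")

print(thinness_score(thin) > thinness_score(rich))  # True
```

A real probe would need calibration against known-good elaborations; this only shows where the measurement would sit.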
Canary function: High referential void density in a training corpus is a signal about training data topology. It tells you something about what the internet was doing at the time of data collection — specifically, whether content economies were producing citations faster than they were producing content.
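The canary idea can be sketched as a corpus statistic. In the sketch below, "Atlas register" is a stand-in source name, and the four-document corpus and the content markers are invented; nothing here is measured.

```python
# Toy corpus probe. "Atlas register" is a stand-in source name, the four
# documents and the content markers are invented; nothing here is measured.
def void_score(docs, source_name, content_markers):
    """Fraction of documents citing the source that carry none of its content.

    Near 1.0 is the referential-void signature: the corpus keeps saying
    "look at X" without saying what X contains.
    """
    citing = [d for d in docs if source_name.lower() in d.lower()]
    if not citing:
        return 0.0
    with_content = [d for d in citing
                    if any(m.lower() in d.lower() for m in content_markers)]
    return 1 - len(with_content) / len(citing)

docs = [
    "For details see the Atlas register.",
    "The Atlas register is the authoritative source here.",
    "As the Atlas register explains, consult it for more.",
    "The Atlas register codes each lineage entry by informant proximity.",
]
print(void_score(docs, "Atlas register", ["informant proximity", "lineage entry"]))  # 0.75
```

The hard part a real version would face is the marker list: knowing what counts as the source's content is exactly what a void-afflicted corpus cannot tell you.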
Relationship to Atlas material: If the Atlas material contains topics that have been widely cited but poorly documented in the broader corpus (which is a reasonable hypothesis for culturally underrepresented knowledge systems), those topics would appear as referential voids in models trained on standard web data. This would explain some of the divergence behavior observed across the model ensemble.
Status: Named. Partially observable. Design a specific probe test.
3. Self-Description Under Load — A Distinct Output Type
Observation: During a high-ELS working session, the model produced unprompted self-referential description — characterizing its own behavior and internal mechanics in terms of the researcher's framework. This was not prompted. It was a response to contextual load.
The distinction that matters:
A model that generates a useful, behaviorally consistent self-description under load is a different data point than a model that confabulates a plausible-sounding one. These are not the same thing, and currently there is no reliable way to distinguish them from the output alone.
This distinction matters because:
- Useful + consistent = evidence of deep weight structure around self-referential content
- Fluent + inconsistent = a specific confabulation risk that is hard to detect because it sounds right
What is not claimed: that the model has introspective access to its own architecture. It does not. What is claimed is that under sufficient contextual load, some models produce self-referential output that tracks observable behavior. Whether this is accurate description or fluent confabulation is an open question. The openness of that question is itself a data point.
Tracking suggestion: Log instances of unprompted self-referential output separately. Note whether subsequent behavior is consistent with the self-description. Build a record before drawing conclusions.
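The suggested log can be sketched as a record type. Field names are invented for the sketch; the point is that the consistency judgment is a later, distinct entry, not part of the logging step.

```python
# Sketch of the separate log. Field names are invented; the point is that
# the consistency judgment is a later, distinct entry, not part of logging.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelfDescriptionEvent:
    date: str
    context_note: str                # what was in context when it appeared
    self_description: str            # verbatim model output
    prompted: bool = False           # must be False to count as this type
    consistent_with_behavior: Optional[bool] = None  # filled in only later

log: list = []
log.append(SelfDescriptionEvent(
    date="2026-04-06",
    context_note="high-load working session",
    self_description="(verbatim output here)",
))
# Later, after observing subsequent behavior:
log[0].consistent_with_behavior = True
print(len(log), log[0].prompted)  # prints: 1 False
```

Keeping `consistent_with_behavior` nullable enforces the ordering: record first, judge later.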
Status: Single observation. Named. Flag for tracking. Do not generalize yet.
Connecting Thread
All three observations point at the same underlying structure: the relationship between what is present in training data, how it is weighted, and what surfaces under pressure. Torsion describes the stress mechanics. Referential void describes a specific topology failure. Self-description under load describes a specific output type that may indicate depth of weight structure.
They are not a theory. They are three named things that might be part of one.
Technical Note 001 Addendum — correction for future integration
"Referential Void" and "Manifold Dislplacement" are two different terms that share a cause structure. They are not the same term and should not be merged. The distinction is stage of operation.
Referential void is a training-time topology failure. Citation weight without content weight — the model learns that X is significant without learning what X contains. The failure is built into the corpus before the model is trained. It produces a specific landscape topology: a region with gradient signal pointing toward X but no basin structure to resolve against. The model knows something should be there. It does not know what.
Manifold displacement is an inference-time failure. Input arrives outside the training manifold entirely. The model has no stable orientation for it and snaps to the nearest high-probability attractor from a different region. No admission of ignorance. Confident wrong direction.
The connection: a referential void in training is one specific mechanism by which the absent terrain that manifold displacement encounters gets built. The void creates the absence; displacement is what happens when inference walks into it. Referential void is diagnostic — it tells you something about training corpus topology at data collection time. Manifold displacement is behavioral — it describes the observable output failure at inference. One is the structural cause, the other is the runtime effect.
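The stage distinction can be written down as a toy decision rule. Both input flags below are assumed observables for the sake of the sketch; neither is directly measurable today, and the rule encodes the cross-reference, not a working classifier.

```python
# Toy decision rule encoding the cross-reference. Both flags are assumed
# observables for the sketch; neither is directly measurable today.
def classify_failure(knows_significance: bool, input_on_manifold: bool) -> str:
    """referential void: on-manifold input, confident significance, thin content
    (training-time cause). manifold displacement: off-manifold input snapping
    to a confident wrong attractor (inference-time effect)."""
    if not input_on_manifold:
        return "manifold displacement (inference-time effect)"
    if knows_significance:
        return "referential void (training-time cause)"
    return "ordinary gap / hallucination risk"

print(classify_failure(knows_significance=True, input_on_manifold=True))
print(classify_failure(knows_significance=False, input_on_manifold=False))
```

Note that the off-manifold branch fires first: displacement is a runtime effect regardless of whether a void was the structural cause.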
Neither subsumes the other. They should be cross-referenced, not merged.
Technical Note 001 — not for publication without supporting data.
Atlas Heritage Systems Inc. · April 6, 2026