Loss Landscape Vocabulary Framework

v13 · April 2026 · Atlas Heritage Systems · Working document — not a finished product

A note before the math

You don't need to understand any of this to read the Framework. But if you want to know why the Framework is built the way it is, the math is where the answer lives.

When a language model trains, it moves through a mathematical landscape — hills, valleys, flat plains — searching for the lowest point. The vocabulary on this page describes the features of that terrain: what makes one region harder to cross than another, what gets preserved in the difficult parts, and what gets smoothed away in the easy ones.

The archaeological claim Atlas makes is simple: the hard parts leave marks. Those marks are readable. That's what the instruments are built to find.

Start with the plain language description of each term. Follow the math when you need it.

How it all fits together

The Framework names the terrain. The instruments measure behavior on it. The schema defines how measurements get recorded. The protocols govern how they're taken — CISP is the governance layer that sits above every active instrument run, enforcing isolation, sequencing, and the human-judgment boundary.

Below the protocols, the automation layer handles transcription: parsing raw model output, computing what can be computed, and leaving blank what requires a Technician's call. Below that is the data the instruments produce over time — the actual record Atlas is building.

The geometry sits at the end of the chain. PyHessian doesn't measure behavior; it measures the mathematical terrain the Framework describes. When there's enough data, the Hessian eigenvalue analysis will either confirm the Framework's terrain claims or force a revision. Working hypotheses stay hypotheses until the math has something to argue with.
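The kind of Hessian eigenvalue analysis described above can be sketched in miniature. PyHessian does this at scale with Hessian-vector products over a real model; the toy below runs the same power-iteration idea on a quadratic loss with one known sharp direction and one flat one. All function names here are illustrative, not Framework or PyHessian API.

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T H theta with a known Hessian:
# one sharp direction (eigenvalue 5.0) and one flat direction (eigenvalue 0.1).
H = np.array([[5.0, 0.0],
              [0.0, 0.1]])

def grad(theta):
    return H @ theta

def hessian_vector_product(theta, v, eps=1e-5):
    # Finite-difference HVP: avoids ever materialising the full Hessian,
    # which is the trick that makes this tractable for real networks.
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2 * eps)

def top_eigenvalue(theta, iters=100, seed=0):
    # Power iteration on the HVP converges to the largest curvature.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(theta.shape)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hessian_vector_product(theta, v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hessian_vector_product(theta, v))

theta = np.zeros(2)  # the minimum of this toy loss
print(top_eigenvalue(theta))  # ≈ 5.0, the sharp direction's curvature
```

On a trained model the same loop reports how sharp the basin the model settled into actually is, which is the measurement the Framework's terrain claims will eventually have to answer to.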

Architectural Structure of the Framework

How the layers relate to each other — resolved through adversarial review by GPT-4, GPT-5.2, Perplexity Sonar, DeepSeek V3, DeepSeek V3.2, Mistral Large, Mistral Large-3, Llama, Llama3.3 70B, Grok, Skywork, and Nemotron-3-Super-120B.

The Heisenberg Resolution
The terrain/navigator distinction is real and irreducible — not because they are independent coordinate systems, but because they are conjugate descriptions. Formal status clarified (Skywork, April 2026): the formal Heisenberg uncertainty principle requires non-commuting operators in a Hilbert space, and there is no equivalent non-commutativity in the loss landscape. What IS real: stationarity and movement are incompatible measurement conditions — a methodological constraint, not a mathematical uncertainty principle. The terrain/navigator conjugacy is a structural analogy with genuine methodological content, not a formal principle derivable from landscape geometry.
Terrain

Position in parameter space. L(θ) and its derivatives. Readable only when model is stationary.

Navigator

Momentum through parameter space. Observable only during movement. Viscosity, memory, perplexity.

Skywork Qualifier Collapse Hierarchy
The seven navigator qualifiers do not form seven independent variables. Three are genuinely independent: density, coupling, elasticity. Four are derived readouts: perplexity (= 2^(cross-entropy loss in bits)), probability (a per-token aggregate of perplexity), viscosity (an eigenvalue spectrum determined by coupling), memory (the causal history of viscosity, not independently measurable in frozen models).
Density → Perplexity → Probability (algebraic chain)
Coupling → Viscosity (causal: the coupling term b appears in the eigenvalue calculation):
λ₁,₂ = (a + d)/2 ± √((a − d)²/4 + b²)
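The derived-readout chains can be computed directly — a minimal illustration with hypothetical function names, assuming cross-entropy is measured in bits and the coupling block is the symmetric 2×2 matrix [[a, b], [b, d]]:

```python
import math

def perplexity(cross_entropy_bits):
    # Density -> Perplexity: perplexity = 2^(cross-entropy in bits)
    return 2 ** cross_entropy_bits

def viscosity_eigenvalues(a, d, b):
    # Coupling -> Viscosity: eigenvalues of the symmetric block [[a, b], [b, d]],
    # matching lambda_{1,2} = (a+d)/2 +/- sqrt((a-d)^2/4 + b^2)
    mean = (a + d) / 2
    spread = math.sqrt((a - d) ** 2 / 4 + b ** 2)
    return mean + spread, mean - spread

print(perplexity(4.26))                       # ≈ 19.2
print(viscosity_eigenvalues(1.0, 0.5, 0.3))   # coupling b=0 would give (a, d) exactly
```

Setting b = 0 collapses the spread to |a − d|/2 and the eigenvalues to a and d, which is the sense in which viscosity is causally determined by coupling rather than being an independent variable.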
Skywork Coverage Gaps → Global Geometry (v12)
Three landscape behaviors none of the seven qualifiers can capture — promoted to first-class terms in the Global Geometry tab (v12):
  • Basin connectivity — B(θ_A, θ_B) = min_φ max_t L(φ(t)) − max(L(θ_A), L(θ_B)) — not described by any qualifier; all seven are locally defined at a point.
  • Symmetry orbits — permutation symmetry group |G| ≥ ∏ nₗ! · 2^nₗ — all seven qualifiers are constant across this orbit. The ablation drift vector is particularly affected.
  • Phase transitions — grokking demonstrates catastrophic behavioral change while the loss surface remains smooth. The Framework describes weight-space geometry, not representational geometry.
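The basin-connectivity term can be approximated along a single straight-line path — a sketch only, since the true B(θ_A, θ_B) minimises over all paths and a linear path merely bounds it from above. The double-well loss and the function names are illustrative, not Framework definitions.

```python
import numpy as np

def loss(theta):
    # Toy 1-D double well: minima at theta = -1 and theta = +1, wall at theta = 0.
    return float((theta ** 2 - 1.0) ** 2)

def linear_path_barrier(theta_a, theta_b, steps=101):
    # B along the straight segment phi(t) = (1-t)*theta_a + t*theta_b:
    # worst loss on the path minus the higher of the two endpoint losses.
    ts = np.linspace(0.0, 1.0, steps)
    path_losses = [loss((1 - t) * theta_a + t * theta_b) for t in ts]
    return max(path_losses) - max(loss(theta_a), loss(theta_b))

print(linear_path_barrier(-1.0, 1.0))  # 1.0: the wall between the two basins
```

A real mode-connectivity measurement would optimise the path itself (e.g. a Bézier curve through parameter space) to drive this bound down; two minima count as connected when some path brings the barrier near zero.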
Archaeological Claim — Mechanism Added (Song et al.)
Song et al. (2024): inferring neural activity before the weight update produces learning properties closer to biological plasticity. Weights encode predictions, not inputs — they are shaped by inference that completed before each update. A frozen model is therefore a snapshot of a generative model's prediction state at the moment of capture. High-perplexity regions are places where prediction error accumulated faster than inference could resolve it before the update discharged it. The "retained potential difference" framing now has a biological learning mechanism.
The Suspension Bridge Frame
The generative goal is a learning model that adapts toward stable tension — like a suspension bridge spanning idiosyncrasy and entropy. A suspension bridge doesn't eliminate tension between its anchor points. It distributes it. A model built for stable tension wouldn't anneal idiosyncrasy out toward consensus. The archaeological signal would be legible because the structure was built to preserve it.
GPT-2 Small First Pass Results
ONE UN-REPLICATED FIRST-PASS RUN — directional only.
Perplexity by Domain
Technical docs          19.1
Vernacular dialect      33.2
Reddit tech             40.1
Non-Western cultural    46.4
Literary prose          49.2
Poetry                  58.6
Non-English text        83.6
Academic abstract      102.5
Inter-Head Coupling by Layer
Layer 0     0.610
Layer 1     0.703
Layer 2     0.724
Layer 3     0.747
Layer 4     0.675
Layer 5     0.903
Layer 6     0.880
Layer 7     0.906
Layer 8     0.911
Layer 9     0.936
Layer 10    0.945
Layer 11    0.789