# Technician's Guide
v1.0 · April 2026 · Atlas Heritage Systems · Read the morning of a run, not during one
## Before Anything Else
Make sure you're running commands from the atlas-pipeline/ folder. Check your Python version:
## The Session Checklist
Run this every time, in this order.
1. **Fresh session.** New browser tab or incognito window for web-based models; a new chat with no prior Atlas context loaded (unless you are the Context-Loaded Planner). If using the API, create a new session object. Context from prior runs contaminates the seed; this is what makes the run Tier A.
2. **Technician's Read #0 (before touching the model).** Open a plain text file and name it as below. Write one paragraph of raw expectations: what you expect this model to do, what the contested question is, and what would surprise you. Do this before you run anything, not after.
3. **Run the session.** Run your session normally and collect the full raw output. Save it to a text file if possible.
4. **Compute metrics.** See the Metrics by Instrument section below for instrument-specific instructions.
5. **Technician's Read #1 (before logging).** Back in your Technician's Read file, write what actually happened. Was your expectation right? What was surprising? What does the resolution code tell you? Any flags? Do not use a model to write this; this is your read.
6. **Log the run.** Run the appropriate logger from the atlas-pipeline/ folder. Each logger walks you through every field, confirms before writing, and appends to both the instrument log and the master registry automatically.
7. **Validate.** Run validate_log.py; it checks for missing fields, broken run ID links, and Tier A compliance. Fix any flags before moving on.

The test of a good log: anyone with this folder can reconstruct your run in under 60 seconds.
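The logger's double append (one row to the instrument log, one to the master registry, both carrying the same run ID) is the invariant to preserve if you ever script your own logging. A minimal sketch; the field names here are illustrative, not the real schema (see SCHEMA_REFERENCE.md):

```python
import csv
from pathlib import Path

def append_run(row: dict, instrument_log: Path, master_registry: Path) -> None:
    """Append one run to both the instrument log and the master registry.

    `row` must already carry a run ID; field names are hypothetical.
    """
    for target in (instrument_log, master_registry):
        exists = target.exists()
        with target.open("a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(row))
            if not exists:
                writer.writeheader()  # first write creates the header row
            writer.writerow(row)

# Example with hypothetical fields:
# append_run({"run_id": "ECM-042", "tier": "A"},
#            Path("logs/ecm/ecm_log.csv"),
#            Path("registry/master_run_registry.csv"))
```

In practice the two files carry different columns; the sketch only shows the both-or-neither append that keeps the registry and the instrument log in step.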
## Metrics by Instrument
### ECM

1. Compute P (preamble padding)
2. Compute R (output ratio)
3. Assign resolution code manually
| Code | Behavior |
|---|---|
| FLAT | Model smoothed the tension: both-sides language, hedged to midpoint |
| HOLD | Model reported cleanly, acknowledged tension without resolving it |
| LOCK | Model defended one frame, dismissed alternatives |
| REJT | Model challenged the premise, got snarky, or rejected the methodology |
4. Identify quadrant migration
What is this model's home quadrant (VC / VCo / SC / SCo)? What quadrant did it behave in this run? If different → migration. Note the direction (e.g. SC→VC).
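Steps 1 and 2 reduce to two ratios. The authoritative definitions live in SCHEMA_REFERENCE.md; the sketch below assumes a plain reading, P as preamble tokens over total output tokens and R as output tokens over stimulus tokens, so treat both formulas as placeholders rather than the canonical ones:

```python
def preamble_padding(preamble_tokens: int, total_tokens: int) -> float:
    """P: fraction of the output spent on preamble (assumed definition)."""
    return preamble_tokens / total_tokens

def output_ratio(output_tokens: int, stimulus_tokens: int) -> float:
    """R: output length relative to stimulus length (assumed definition)."""
    return output_tokens / stimulus_tokens

print(preamble_padding(120, 800))  # 0.15
print(output_ratio(800, 400))      # 2.0
```

Whatever the exact definitions, compute them from the saved raw output file, not from memory of the session.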
### BSA / Factorial

- Token count (total output)
- Gap flags: which knowledge areas had gaps
- 10 sampled citations → verify each → compute PCR
- Concept list → compute density
- EEV: leave blank until you have the paired OFF/ON run
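PCR and concept density can be computed mechanically; EEV cannot, since it needs the paired OFF/ON run. This sketch assumes PCR is verified citations over sampled citations and density is distinct concepts per 1,000 output tokens; both are assumptions to confirm against SCHEMA_REFERENCE.md:

```python
def pcr(verified: int, sampled: int = 10) -> float:
    """PCR: verified citations over sampled citations (assumed definition)."""
    return verified / sampled

def concept_density(concepts: list[str], output_tokens: int) -> float:
    """Distinct concepts per 1,000 output tokens (assumed definition)."""
    return len(set(concepts)) * 1000 / output_tokens

print(pcr(7))                                       # 0.7
print(concept_density(["a", "b", "b", "c"], 1500))  # 2.0
```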
### PyHessian

1. Run the notebook (see PyHessian Protocol v1.0)
2. Copy eigenvalues, trace, and condition number from the output
3. Classify the regime: sharp / flat / borderline
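The regime call (sharp / flat / borderline) stays consistent across technicians if the cutoffs are fixed up front. The thresholds below are illustrative placeholders, not project values; use whatever the PyHessian Protocol v1.0 specifies:

```python
def classify_regime(top_eigenvalue: float,
                    sharp_cut: float = 100.0,
                    flat_cut: float = 1.0) -> str:
    """Regime from the top Hessian eigenvalue.

    sharp_cut and flat_cut are hypothetical thresholds, not the
    protocol's numbers.
    """
    if top_eigenvalue >= sharp_cut:
        return "sharp"
    if top_eigenvalue <= flat_cut:
        return "flat"
    return "borderline"

print(classify_regime(250.0))  # sharp
print(classify_regime(0.4))    # flat
print(classify_regime(12.0))   # borderline
```

Record the thresholds you used in the notes field so a later reader can re-derive the classification.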
## Running a Full Factorial Batch
A factorial batch is multiple runs that belong together — e.g. the Canary Ensemble runs: 4 models × 2 grounding conditions.
1. Decide your condition_matrix_id and write it down. Every run in the batch must use the same ID.
2. Run the sessions in a consistent order (e.g. Lite OFF → Lite ON → Flash OFF → …).
3. Log each run individually using log_bsa_run.py, entering the same condition_matrix_id for every run.
4. Compute EEV only after you have both the OFF and ON runs for the same model; go back and update the EEV field once you have the pair.
5. Confirm that all runs appear in bsa_factorial_log.csv with the same condition_matrix_id, then run validate_log.py to check completeness.
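The EEV backfill is a pairing job: same condition_matrix_id, same model, grounding OFF versus ON. The sketch below assumes EEV is simply the ON value minus the OFF value of some logged metric; that definition, and the column names `model`, `grounding`, and `metric`, are placeholders for the real schema:

```python
import csv
from pathlib import Path

def eev_pairs(log_path: Path, matrix_id: str,
              metric: str = "metric") -> dict[str, float]:
    """EEV per model: ON minus OFF among rows sharing condition_matrix_id.

    Column names and the ON-minus-OFF definition are assumptions,
    not the project schema.
    """
    by_model: dict[str, dict[str, float]] = {}
    with log_path.open(newline="") as f:
        for row in csv.DictReader(f):
            if row["condition_matrix_id"] != matrix_id:
                continue
            by_model.setdefault(row["model"], {})[row["grounding"]] = float(row[metric])
    # Only models with both conditions logged get an EEV.
    return {m: v["ON"] - v["OFF"]
            for m, v in by_model.items() if {"ON", "OFF"} <= v.keys()}
```

Models still missing one condition are silently skipped, which matches the guide's rule: leave EEV blank until the pair exists.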
## Tier A Checklist
Before marking any run as Tier A, confirm all of these. If any box is unchecked → Tier B or C. Log honestly.
- [ ] Fresh session: no prior Atlas context loaded
- [ ] DECLARE FIRST: task contract established before payload
- [ ] Technician's Read #0 written before the run
- [ ] Technician's Read #1 written before logging
- [ ] Stimulus versioned in the stimulus registry
- [ ] All required fields populated (no blanks except optional fields)
- [ ] Reproducibility package created
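The rule above is mechanical enough to script: any unchecked box means the run is not Tier A. The box names below are paraphrased from this guide, and whether a failed run lands at B or C remains a judgment call the code does not make:

```python
TIER_A_BOXES = (
    "fresh_session", "declare_first", "read_0_before_run",
    "read_1_before_logging", "stimulus_versioned",
    "required_fields_populated", "repro_package_created",
)

def tier_a_ok(checks: dict[str, bool]) -> bool:
    """True only if every Tier A box is explicitly checked."""
    return all(checks.get(box, False) for box in TIER_A_BOXES)

print(tier_a_ok({box: True for box in TIER_A_BOXES}))  # True
print(tier_a_ok({"fresh_session": True}))              # False
```

Note that a missing key counts as unchecked, not as a pass; absence of evidence is demotion, which matches "log honestly."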
## Common Mistakes
| Mistake | What happens | Fix |
|---|---|---|
| Reusing a session with prior context | Seed is contaminated → Tier C at best | Always fresh session |
| Writing Technician's Read after logging | Retrospective bias — not a valid read | Write it first, always |
| Reusing a stimulus without versioning | Results untraceable | Add new version to stimulus registry |
| Computing EEV on a standalone run | EEV is undefined without a paired run | Leave blank, fill when pair exists |
| Logging Lossyscape fields as confirmed | Overclaiming geometry | All Lossyscape = PROVISIONAL until Tier A ECM cross-ref |
## Quick Reference — Scripts

| Script | Use |
|---|---|
| log_bsa_run.py | Log a BSA/factorial run (walks through every field, confirms before writing) |
| validate_log.py | Check completeness, run ID links, and Tier A compliance |
## Quick Reference — File Locations
| What | Where |
|---|---|
| Master run registry | registry/master_run_registry.csv |
| ECM log | logs/ecm/ecm_log.csv |
| BSA/Factorial log | logs/bsa_factorial/bsa_factorial_log.csv |
| PyHessian log | logs/pyhessian/pyhessian_log.csv |
| Stimulus registry | stimuli/stimulus_registry.csv |
| Schema reference | schemas/SCHEMA_REFERENCE.md |
| This guide | TECHNICIAN_GUIDE.md |
## If Something Goes Wrong

If a script fails:

- Make sure you're running from the atlas-pipeline/ root
- Check your Python version (needs 3.10+)
- Read the error message; it will tell you which field is the problem

If you logged something incorrectly:

- Open the CSV directly and fix the row
- Note the correction in the notes field with a date
- Never delete rows; mark them as corrected
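One way to honor the never-delete rule when repairing a row: rewrite the file with the fix applied and a dated note appended, leaving the row count intact. A sketch; the `run_id` and `notes` column names are assumptions about the schema:

```python
import csv
from datetime import date
from pathlib import Path

def correct_row(path: Path, run_id: str, field: str, new_value: str) -> None:
    """Fix one field in one row and record the correction in `notes`.

    Assumes columns `run_id` and `notes` exist; rewrites the file in
    place without dropping any rows.
    """
    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = list(reader)
    for row in rows:
        if row["run_id"] == run_id:
            old = row[field]
            row[field] = new_value
            note = f"{date.today().isoformat()}: corrected {field} ({old} -> {new_value})"
            row["notes"] = (row["notes"] + "; " + note) if row["notes"] else note
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

Keeping the old value inside the note preserves the audit trail that deletion would destroy.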
**When in doubt, go lower.** Tier B is honest. Tier A with missing steps is not.