1) Cross-Model Variance (CMV)
Measures semantic variance between different models interpreting the same input.
Highlights meaningful divergence beyond stylistic or formatting differences.
Used for: identifying disagreement clusters and calibration gaps.
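One way CMV could be operationalized is as the mean pairwise semantic distance between model interpretations of the same input, computed over embeddings. This is a minimal sketch, not the canonical CMV definition; the embedding step is assumed to have happened upstream, and the model names and vectors below are purely illustrative.

```python
import math
from itertools import combinations

def cosine_distance(a, b):
    """1 minus cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def cmv(embeddings):
    """Mean pairwise cosine distance across model interpretations.

    0.0 means all models agree exactly; larger values mean more
    semantic divergence on the same input."""
    pairs = list(combinations(embeddings, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical models' embeddings of one input:
models = {
    "model_a": [1.0, 0.0, 0.2],
    "model_b": [0.9, 0.1, 0.3],  # close to model_a
    "model_c": [0.1, 1.0, 0.0],  # interprets the input differently
}
score = cmv(list(models.values()))
```

Because the distance is taken over embeddings rather than raw text, stylistic and formatting differences largely cancel out, which matches the intent of isolating meaningful divergence.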
2) Resolution Elasticity (RE)
Measures how meaning stretches or compresses as semantic resolution changes.
Ensures models remain comparable across different levels of detail.
Used for: preventing false disagreement caused by mismatched granularity.
3) Stability Under Load (SUL)
Measures how meaning behaves under pressure:
shortened context, noisy inputs, conflicting signals, or output compression.
Used for: stress-testing coherence in real-world conditions.
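A SUL score could be sketched as a worst-case comparison: re-run the same prompt under each stress condition, embed the outputs, and take the lowest similarity to the unstressed baseline. The condition names, vectors, and the worst-case aggregation are all assumptions for illustration.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sul(baseline, stressed):
    """Worst-case semantic similarity to the unstressed baseline.

    Returns (score, condition): 1.0 means meaning fully preserved
    under every stress condition; lower scores name the condition
    that breaks coherence."""
    scores = {name: cosine_similarity(baseline, vec)
              for name, vec in stressed.items()}
    weakest = min(scores, key=scores.get)
    return scores[weakest], weakest

# Hypothetical output embeddings for one prompt:
baseline = [0.8, 0.5, 0.1]
stressed = {
    "truncated_context":  [0.7, 0.6, 0.1],
    "noisy_input":        [0.8, 0.4, 0.2],
    "output_compression": [0.2, 0.1, 0.9],  # meaning collapsed here
}
score, weakest = sul(baseline, stressed)
```

Reporting the failing condition alongside the score keeps the metric actionable: it points at which real-world pressure the system cannot absorb.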
4) Temporal Coherence (TC)
Measures whether definitions and interpretations remain coherent across time,
including rephrasing, updates, retraining cycles, and version drift.
Used for: detecting gradual semantic decay or instability.
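One assumed operationalization of a TC check: embed the same definition at successive versions, compare each version to its predecessor, and flag the steps where similarity drops below a tolerance. The 0.9 threshold and the version embeddings are illustrative calibration choices, not part of TC itself.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tc_report(versions, threshold=0.9):
    """Min consecutive-version similarity plus indices of decayed steps."""
    sims = [cosine_similarity(versions[i], versions[i + 1])
            for i in range(len(versions) - 1)]
    breaks = [i for i, s in enumerate(sims) if s < threshold]
    return min(sims), breaks

# Embeddings of one definition across three versions (illustrative):
versions = [
    [1.0, 0.0],    # v1: original definition
    [0.95, 0.05],  # v2: light rephrasing, meaning intact
    [0.5, 0.8],    # v3: a rewrite that shifted the meaning
]
min_sim, decayed_steps = tc_report(versions)
```

Checking consecutive versions, rather than only first-vs-last, localizes where in the history (a rephrase, an update, a retraining cycle) coherence was lost.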
5) Semantic Drift (SD)
Measures directional change in meaning over time —
distinguishing natural evolution from unintentional drift.
Used for: monitoring continuity and controlled evolution of meaning.
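The directional aspect of SD can be sketched by comparing net movement to total path length over a time series of embeddings: a ratio near 1 indicates a consistent direction of change (evolution or sustained drift), while a ratio near 0 indicates wandering around a stable meaning. The trajectories below are toy data, and this ratio is one assumed way to capture directionality, not the framework's official formula.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def drift_direction_ratio(trajectory):
    """Net displacement divided by total path length.

    ~1.0: meaning is moving steadily in one direction.
    ~0.0: meaning fluctuates but ends up where it started."""
    path = sum(euclidean(trajectory[i], trajectory[i + 1])
               for i in range(len(trajectory) - 1))
    net = euclidean(trajectory[0], trajectory[-1])
    return net / path if path else 0.0

# Toy embedding trajectories of one term over four snapshots:
directional = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]]  # steady shift
noisy       = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.0, 0.0]]  # wandering
```

Paired with a magnitude measure, the ratio helps separate controlled evolution (directional, reviewed) from unintentional drift (directional, unreviewed) and harmless noise (non-directional).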
6) Opt-Verified Integrity (OVI)
Measures whether outputs remain traceable to verified sources,
CDI definitions, or registered ChiR-IPP artifacts — without inflating certainty.
Used for: provenance assurance and auditability.
7) Provenance Depth (PD)
Measures how deeply claims are grounded in explicit sources,
definitions, and recorded lineage — not just surface citation.
Used for: distinguishing grounded insight from narrative gloss.
8) Referential Breadth (RB)
Measures the diversity and relevance of reference frames invoked —
mathematical, empirical, institutional, or domain-specific.
Used for: evaluating contextual richness without overreach.