Invariant III — Execution Legitimacy Constraint
No transaction commit or actuation command is valid unless a corresponding Merkle-committed log entry exists and is verifiable. Any action without a committed log hash is considered structurally invalid.
§ 0

System Model

0.1 Node Taxonomy

Validators

Full log replicas. Construct and verify Merkle subtrees. Participate in root anchoring consensus. Explicitly modeled as potentially Byzantine. Threshold: ⌈2n/3⌉ + 1 for signed root validity.

Auditors

Read-only, full log access. Independent forensic verification via logarithmic inclusion proofs. Explicitly untrusted — output is informational, not operative.

Light Clients

Hold only anchored root hashes. Verify individual events via O(log n) inclusion proofs. Trust model is anchoring-dependent: minimum two independent chains required.

Trust Boundaries

Trusted: HSMs, TPMs, hardware-encrypted WAL.
Untrusted: All software processes, including the TL execution engine.
Conditional: TSAs, anchoring chains.

0.2 Network Model

TL assumes a partially synchronous network model (Dwork, Lynch, Stockmeyer, 1988). Safety guarantees — tamper evidence, causal ordering, Merkle integrity — hold unconditionally under asynchrony. Liveness (timely anchoring) requires synchronous periods sufficient to complete the 500 ms deferral window.

0.3 Failure Model

TL employs Byzantine Fault Tolerance (BFT) for the validator set. Crash fault tolerance (CFT) is architecturally insufficient given the adversarial threat model. The system tolerates f = ⌊(n−1)/3⌋ Byzantine validators. Recommended minimum deployment: n = 7, tolerating f = 2.

0.4 External Dependency Trust

Timestamping authorities (RFC 3161): Trusted for timestamp binding only. Minimum 2 independent TSAs; earliest consistent timestamp wins.

Anchoring chains: Public append-only bulletin boards. TL requires commitment on at least two chains from distinct consensus families.

Storage providers: Trusted for availability only; not for confidentiality or integrity.

§ 1

Merkle as a Core Structural Component of TL

1.1 Why Merkle Is Necessary, Not Optional

TL's governance guarantees rest on four properties structurally impossible without a cryptographic accumulator: tamper evidence, causal ordering, selective verifiability, and immutable decision freezing. Merkle trees provide all four simultaneously with O(log n) proof size.

1.2 TL Properties That Collapse Without Merkle

Immutable Ledger: Log entries are individually hashable but not mutually bound without Merkle commitment. Deletions between entries are undetectable from adjacent entries. Anchoring a Merkle root on a public chain makes suppression globally detectable.

Epistemic Hold Immutability: Without a Merkle-committed, anchored hash, a malicious actor with log write access could change the triadic_outcome field and recompute a plausible log. With Merkle commitment, any field mutation invalidates every ancestor hash to the anchored root.

Active Axiom Set Integrity: Without Merkle inclusion, the axiom set hash is a field in a mutable record. With Merkle inclusion, it is frozen into the anchored root. Retroactive claims that a different rule-set governed a decision are falsifiable by proof.

Selective Verifiability: Without a tree structure, verifying one event requires full log access. Merkle inclusion proofs of O(log n) size enable a light client to verify one event in milliseconds — non-negotiable for regulatory use.

1.3 How Merkle Freezes Epistemic Hold Outcomes

When the decision engine produces an Epistemic Hold (0) outcome, the following sequence executes atomically (formalized in §6):

1. Event payload serialized in canonical form (§2 determinism requirements)
2. triadic_outcome field set to 0
3. Event hashed using SHA3-256
4. Leaf hash inserted into rolling Merkle buffer at position determined by Monotonic Sequence ID
5. Updated subtree root computed
6. Non-action record written to primary database with reference to committed leaf hash

Step 4 occurs before or simultaneously with step 6. The outcome is frozen at the moment of decision.
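
The sequence can be sketched in a few lines of Python, using JSON with sorted keys as a stand-in for the proto3 deterministic serialization of §2.4 and a plain dict for the rolling buffer (freeze_epistemic_hold and the modulus constant are illustrative, not normative):

import hashlib
import json

def freeze_epistemic_hold(payload: dict, seq_id: int, buffer: dict) -> bytes:
    # Steps 1-2: canonical serialization with the triadic outcome pinned to 0.
    event = dict(payload, triadic_outcome=0)
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":")).encode("utf-8")
    # Step 3: SHA3-256 leaf hash.
    leaf_hash = hashlib.sha3_256(canonical).digest()
    # Steps 4-5: placement fixed by the monotonic seq_id (rolling buffer B = 1,024);
    # the production system recomputes the affected subtree root here.
    buffer[seq_id % 1024] = leaf_hash
    # Step 6 (the non-action record referencing leaf_hash) runs only after this returns.
    return leaf_hash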

1.4 Concrete Scenario: Prevention of Retroactive Reinterpretation

Scenario — Financial Infrastructure / Triadic State Transition

A financial infrastructure operator processes a high-risk derivative transaction at T=0. TL decision engine (Active Axiom Set V_3) produces Epistemic Hold (0). Transaction is not executed. At T=72h, a compliance officer claims the outcome was Act (+1).

WITHOUT MERKLE: The outcome field can be changed from 0 to 1, the hash recomputed, and the forensic record is ambiguous.

WITH TL MERKLE: At T=0 the event is hashed, Merkle-committed, and anchored within 500 ms on two independent chains. At T=72h the modified leaf hash does not reconstruct the anchored root. The retroactive reinterpretation is falsified. The original outcome 0 is the only cryptographically valid state.

§ 2

Canonical Leaf Node Specification

2.1 Schema Definition

L_i := {
  event_id          : UUID v4 (128-bit, RFC 4122)
  seq_id            : uint64, monotonically increasing, globally unique per domain
  prev_event_hash   : bytes[32], SHA3-256 of L_{i-1} canonical serialization
  timestamp_utc     : int64, Unix epoch milliseconds, RFC 3161 TSA-bound
  tsa_signature     : bytes, RFC 3161 timestamp token
  hold_trigger_src  : enum { RISK_THRESHOLD | PILLAR_CONFLICT | AXIOM_BOUNDARY |
                              ORACLE_UNVERIFIABLE | MANUAL_OVERRIDE }
  pillar_ref        : uint8[1..8], reference to one or more of the Eight Pillars
  risk_class        : enum { LOW | MEDIUM | HIGH | CRITICAL }
  domain            : enum { ECONOMIC | FINANCIAL | CYBER_PHYSICAL }
  triadic_outcome   : int8 { +1 | 0 | -1 }
  integrity_flags   : uint32, bitmask of integrity check results
  schema_version    : uint16, increments on any schema change
  schema_hash       : bytes[32], SHA3-256 of canonical schema at schema_version
  axiom_set_hash    : bytes[32], SHA3-256 of Active Axiom Set at version V_k
  hash_algo_version : uint8 (0x01=SHA3-256, 0x02=BLAKE3, 0x10=SHA3-256+Dilithium)
}

2.2 prev_event_hash

Contains SHA3-256 of the immediately preceding leaf L_{i-1} in the same domain sequence. Genesis event (seq_id = 0) uses the 32-byte zero vector. Any insertion, deletion, or reordering of events invalidates prev_event_hash for all subsequent events, producing a detectable chain break. This field is distinct from the Merkle tree structure: the tree provides random-access inclusion proof; the chain provides sequential ordering proof. Both are required.
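
A sketch of the sequential check this field enables, assuming each leaf record carries its canonical serialization alongside the declared prev_event_hash (field names illustrative):

import hashlib

GENESIS_PREV = bytes(32)  # 32-byte zero vector for the genesis event (seq_id = 0)

def first_chain_break(leaves: list[dict]) -> int | None:
    # `leaves` are ordered by seq_id within one domain sequence.
    # Returns the seq_id of the first break, or None if the chain is intact.
    expected = GENESIS_PREV
    for leaf in leaves:
        if leaf["prev_event_hash"] != expected:
            return leaf["seq_id"]  # insertion, deletion, or reordering detected
        expected = hashlib.sha3_256(leaf["canonical"]).digest()
    return None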

2.3 Active Axiom Set Hash

Computed as SHA3-256 over the UTF-8 encoded, canonically serialized TL rule-set document (RFC 8785 JSON Canonicalization Scheme). Any change to any rule-set field produces a new axiom_set_hash. Retroactive reinterpretation is cryptographically falsifiable: any change to the axiom_set_hash field invalidates the leaf hash, and therefore the Merkle proof, against the anchored root.

2.4 Determinism Requirements

Canonical serialization: Protocol Buffers v3 with deterministic mode enabled. Field order: field number order. Encoding: UTF-8 strings; fixed-width little-endian numerics; raw bytes for hash fields. Non-deterministic values are explicitly prohibited. Locale-dependent encodings are prohibited. Identical byte output is required on any compliant implementation, any hardware, any locale.

2.5 Privacy Requirements

Personal data must be processed through a three-stage pipeline before leaf inclusion: (1) Redaction of free-text personal data to fixed-length placeholder with cross-referenced audit entry. (2) Pseudonymization of structured identifiers via HMAC-SHA256(pseudonymization_key, raw_identifier) with domain-specific per-period keys. (3) Prohibition of raw personal data — hash functions are not privacy-preserving against low-entropy inputs from a known universe.
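
Stage (2) reduces to a single keyed call; a sketch, assuming key distribution and per-period rotation are handled elsewhere:

import hashlib
import hmac

def pseudonymize(raw_identifier: str, pseudonymization_key: bytes) -> str:
    # Keyed pseudonym per stage (2). Without the key, a low-entropy identifier
    # from a known universe cannot be recovered by enumeration.
    return hmac.new(pseudonymization_key, raw_identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()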

2.6 Immutability Enforcement

Three interlocking mechanisms: schema_hash inclusion (schema changes invalidate all subsequent leaf hashes); full-field hashing (any field mutation propagates to root); append-only storage enforcement (Section 9). An attacker must either mutate storage (detectable by integrity checks) or forge a SHA3-256 preimage (computationally infeasible).

§ 3

Merkle Tree Construction Model

3.1 Hash Algorithm Selection

Primary: SHA3-256 (FIPS 202). 128-bit collision resistance, 256-bit preimage resistance. Sponge construction distinct from SHA-2. FIPS 140-3 hardware implementations available.
Secondary: BLAKE3 (hash_algo_version 0x02) for performance-critical paths (~3× SHA3-256 throughput on AVX-512).
Post-quantum hybrid: SHA3-256 + Dilithium (0x10). Phased transition support via hash_algo_version field — no rehashing of historical events required or permitted.
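
Version-field dispatch can be sketched as follows; the constants mirror the values above, the blake3 call assumes the third-party PyPI package of that name (not standard library), and the separate Dilithium attestation in mode 0x10 is omitted:

import hashlib

SHA3_256, BLAKE3, SHA3_DILITHIUM = 0x01, 0x02, 0x10

def leaf_digest(canonical: bytes, hash_algo_version: int) -> bytes:
    if hash_algo_version in (SHA3_256, SHA3_DILITHIUM):
        return hashlib.sha3_256(canonical).digest()
    if hash_algo_version == BLAKE3:
        import blake3  # third-party package
        return blake3.blake3(canonical).digest()
    raise ValueError(f"unknown hash_algo_version {hash_algo_version:#04x}")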

3.2 Binary vs. Ternary Branching Analysis

Binary Tree (b=2) — Selected
         Root
        /    \
      H01    H23
     / \    / \
   L0  L1  L2  L3

Depth (N=10⁶): 20
Proof size:    640 bytes
Depth (N=10⁹): 30
Proof size:    960 bytes

Ternary Tree (b=3) — Evaluated
        Root
      /  |  \
   H012 H345 H678
   /|\  /|\  /|\
  L0 L1 L2 L3 L4 ...

Depth (N=10⁶): 13
Proof size:    832 bytes
Depth (N=10⁹): 19
Proof size:    1,216 bytes

Topology Justification: Binary branching is selected. The triadic semantics of TL are encoded in the leaf schema (triadic_outcome field) and the Active Axiom Set — not in tree topology. The tree's function is cryptographic accumulation; its topology is optimized for proof efficiency and implementation maturity. Ternary trees produce shallower depth but larger per-step proofs; net proof size is comparable. Binary trees carry universal tooling compatibility with Bitcoin/Ethereum proof infrastructure.
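
The depth and proof-size figures above follow from d = ⌈log_b N⌉ with (b − 1) sibling digests of 32 bytes per level; a quick check:

from math import ceil, log

def depth_and_proof_bytes(n_leaves: int, branching: int) -> tuple[int, int]:
    depth = ceil(log(n_leaves, branching))
    return depth, depth * (branching - 1) * 32

print(depth_and_proof_bytes(10**6, 2))  # (20, 640)
print(depth_and_proof_bytes(10**9, 2))  # (30, 960)
print(depth_and_proof_bytes(10**6, 3))  # (13, 832)
print(depth_and_proof_bytes(10**9, 3))  # (19, 1216)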

3.3 Construction Requirements

Asynchronous updates: Tree updates run on a dedicated background thread. Leaf hash computation is synchronous (required for §6 atomic commitment); subtree updates are async and do not block the critical path.

Rolling buffer: B = 1,024 leaves with per-entry CRC-32C. Buffer fill triggers partial subtree computation. CRC mismatch triggers immediate anomaly signaling.

Deterministic placement: Leaf position = seq_id mod current tree capacity. Enables independent reconstruction by auditors.

Odd-leaf handling: Last leaf hash is duplicated to form a pair (consistent with Bitcoin convention). Documented; auditors must implement identically.
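
A minimal sketch of subtree root construction under this convention, assuming left-then-right concatenation before hashing:

import hashlib

def subtree_root(leaf_hashes: list[bytes]) -> bytes:
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:            # odd count: duplicate the last hash
            level.append(level[-1])   # Bitcoin convention, as specified above
        level = [hashlib.sha3_256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]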

3.4 Replay Protection

Event ID uniqueness: UUID v4 from HSM hardware entropy. Storage-layer uniqueness constraint rejects duplicates before leaf construction.

Monotonic sequence validation: seq_id from HSM/TPM counter, incremented atomically with WAL write. Non-consecutive values trigger log suspension.

Replay cache: Per-anchoring-window event_id cache persisted to WAL; restored before any new events on restart.

§ 4

Hierarchical Integrity Model

4.1 Domain Subtree Structure

Three independent Merkle subtrees: Subtree_E (Economic Systems), Subtree_F (Financial Infrastructure), Subtree_C (Cyber-Physical Systems). Domain routing is determined by the domain field, derived from pillar_ref via a deterministic mapping in the Active Axiom Set. An event cannot appear in more than one subtree.

4.2 Master Root Construction

R_M := SHA3-256(
  root_index          ||  // uint64, HSM-backed monotonic counter
  timestamp_utc       ||  // int64, Unix epoch milliseconds
  prev_master_root    ||  // bytes[32], hash of previous R_M
  Subtree_E.root      ||  // bytes[32]
  Subtree_F.root      ||  // bytes[32]
  Subtree_C.root      ||  // bytes[32]
  active_axiom_hash   ||  // bytes[32]
  hash_algo_version       // uint8
)

Inclusion proof overhead above the domain proof is the remainder of the R_M preimage: four 32-byte hash fields (prev_master_root, the two sibling subtree roots, active_axiom_hash) plus root_index (8 bytes), timestamp_utc (8 bytes), and hash_algo_version (1 byte), for 4 × 32 + 17 = 145 bytes. A financial regulator need only verify the Subtree_F root against R_M — no access to other domain subtrees required.
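
A sketch of the R_M computation, assuming the fixed-width little-endian numeric encoding of §2.4 also applies to the preimage fields (byte order for R_M is not stated explicitly above):

import hashlib
import struct

def master_root(root_index: int, timestamp_ms: int, prev_master_root: bytes,
                subtree_e: bytes, subtree_f: bytes, subtree_c: bytes,
                active_axiom_hash: bytes, hash_algo_version: int = 0x01) -> bytes:
    preimage = (struct.pack("<Q", root_index) +        # uint64
                struct.pack("<q", timestamp_ms) +      # int64, Unix epoch ms
                prev_master_root + subtree_e + subtree_f + subtree_c +
                active_axiom_hash +
                struct.pack("<B", hash_algo_version))  # uint8
    return hashlib.sha3_256(preimage).digest()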

4.3 Root-of-Roots Chain and Subtree Continuity

Each R_M(k) includes prev_master_root = R_M(k−1), establishing a forward hash chain over all Master Roots. Any alteration of a historical Master Root invalidates all subsequent roots. Subtree sequence counters enable gap detection across anchoring cycles. Three independent layers of continuity: prev_event_hash (within-domain sequential), subtree root chaining (within-domain cross-cycle), and Master Root chaining (cross-domain cross-cycle).

§ 5

Anchoring Strategy

5.1 Timing Parameters

Maximum anchoring delay: 500 ms (HSM-backed timer enforcement).
Nominal anchoring trigger: 250 ms elapsed OR 1,000 new leaves since last anchor, whichever occurs first.
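
The trigger and deferral logic implied by these parameters and §5.4, as a sketch (function names illustrative):

NOMINAL_MS, HARD_LIMIT_MS, LEAF_TRIGGER = 250, 500, 1_000

def should_anchor(elapsed_ms: float, leaves_since_anchor: int) -> bool:
    # Nominal trigger: 250 ms elapsed OR 1,000 new leaves, whichever occurs first.
    return elapsed_ms >= NOMINAL_MS or leaves_since_anchor >= LEAF_TRIGGER

def must_suspend_act(elapsed_ms: float, anchored: bool) -> bool:
    # Past the 500 ms deferral window without a successful anchor,
    # Act (+1) outcomes are suspended (§5.4).
    return elapsed_ms > HARD_LIMIT_MS and not anchored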

5.2 Multi-Chain Anchoring

Chain A (proof-of-work): Bitcoin mainnet via OP_RETURN (32-byte root hash + 8-byte root_index + metadata). Finality: 6 confirmations (~60 min).

Chain B (PoS with BFT finality): Ethereum mainnet or equivalent. Finality: two epochs under Casper FFG (~12.8 min on Ethereum).

Rationale: a 51% attack on one chain does not affect the other. BFT chain provides faster light-client verification availability; PoW chain provides longer-horizon security.

5.3 Time Integrity and Anti-Backdating

RFC 3161 TSA signature obtained before anchoring transaction broadcast, binding timestamp to root hash before public commitment. Any root with timestamp_utc more than 5 seconds earlier than TSA-response time is rejected. HSM clock synchronized to GPS-disciplined NTP; maximum permitted skew: 100 ms.
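
The rejection rule as a predicate; a sketch that folds clock-skew handling into the single 5-second bound stated above:

MAX_BACKDATE_MS = 5_000  # root timestamp may not precede TSA time by more than 5 s

def accept_root_timestamp(root_timestamp_ms: int, tsa_response_ms: int) -> bool:
    # Reject any root claiming to be more than 5 seconds older than the
    # TSA-bound response time.
    return tsa_response_ms - root_timestamp_ms <= MAX_BACKDATE_MS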

5.4 Deferred Anchoring Mode

Activates during peak load (>50,000 events/sec) or anchoring infrastructure outages.

Cascade roots: Computed every 50 ms over accumulated leaves. Stored in WAL with timestamp and seq_id range. Not on-chain; serve as crash recovery checkpoints.

Maximum deferral: 500 ms. Beyond 500 ms without successful anchor: Act (+1) outcomes suspended; Epistemic Hold (0) and Refuse (−1) continue with cascade root protection.

WAL implementation: Append-only, hardware-encrypted, fsync'd after each write. Each entry: seq_id, leaf_hash, cascade_root, CRC-32C. On recovery: sequential replay from last anchored root_index, reconstruction validated against seq_id sequence, anomaly-flagged for unresolvable gaps.
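
A sketch of the WAL entry layout, with a bitwise CRC-32C fallback since Python's zlib.crc32 implements plain CRC-32 rather than the Castagnoli polynomial; production systems would use the hardware SSE4.2 instruction:

import struct

def crc32c(data: bytes) -> int:
    # Software CRC-32C (reflected polynomial 0x82F63B78).
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def wal_entry(seq_id: int, leaf_hash: bytes, cascade_root: bytes) -> bytes:
    body = struct.pack("<Q", seq_id) + leaf_hash + cascade_root
    return body + struct.pack("<I", crc32c(body))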

Failure to reconcile deferred logs invalidates system integrity. Unresolvable WAL gaps produce permanent integrity gaps announced on the public verification endpoint and treated as integrity failures for all Merkle proofs spanning the affected seq_id range.

5.5 Reconciliation Protocol

Reconciliation daemon monitors root_index sequence on all anchoring chains. Gap detection triggers WAL query for cascade roots covering the missing range. If cascade roots found: Master Root is reconstructed and retroactively anchored with a reconciliation flag. If absent: integrity failure declared and logged. All reconciliation events are themselves Merkle-committed TL log entries.

§ 6

Causal Integrity Enforcement

6.1 Two-Phase Commit Protocol

Phase 1 (prepare):
  execution engine → serialize E_i canonically
  compute H(E_i)
  write H(E_i) to WAL
  fsync() — MUST complete before Phase 2
  issue seq_id from HSM monotonic counter

Phase 2 (commit):
  write governance action record to primary DB
  record includes: seq_id, H(E_i)
  
Failure modes:
  Phase 1 fail → Phase 2 not attempted (safe)
  Phase 2 fail → WAL entry marked orphaned (recoverable anomaly)
  Phase 2 succeed without Phase 1 → architecturally impossible
    (seq_id not issued without WAL fsync)
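
A sketch of the ordering constraint, with a software counter standing in for the HSM monotonic counter and a callable standing in for the primary-DB write; the point is that no seq_id exists until fsync returns:

import hashlib
import os

_counter = 0  # stand-in for the HSM/TPM monotonic counter of §3.4

def next_hsm_counter() -> int:
    global _counter
    _counter += 1
    return _counter

def two_phase_commit(event_canonical: bytes, wal_path: str, db_append) -> int:
    event_hash = hashlib.sha3_256(event_canonical).digest()
    # Phase 1 (prepare): the WAL write MUST be durable before any seq_id exists.
    fd = os.open(wal_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, event_hash)
        os.fsync(fd)                      # no fsync, no Phase 2
    finally:
        os.close(fd)
    seq_id = next_hsm_counter()           # issued only after fsync returns
    # Phase 2 (commit): the primary-DB record carries both seq_id and H(E_i).
    db_append({"seq_id": seq_id, "hash": event_hash.hex()})
    return seq_id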

6.2 Execution Interlock

Transaction commit reference: Every Act (+1) outcome record in the primary database requires transaction_log_hash = H(E_i) and transaction_seq_id = seq_id(E_i). Records missing these fields are rejected at the database write layer.

Actuation command validity: In cyber-physical domains, actuation commands are transmitted only after WAL commitment of the corresponding event. The command payload includes seq_id and H(E_i). The actuator's TL verification module verifies the inclusion proof before executing. Commands without a verifiable log entry are rejected as structurally invalid per Invariant III.

Hardware enforcement: The interlock is implemented in the HSM. The HSM holds the actuation command signing key and will not sign a command unless the corresponding event has been WAL-committed. Software-level bypass does not produce a valid HSM-signed command.

§ 7

Proof Generation and Verification

7.1 Merkle Inclusion Proof Structure

Proof_i := {
  leaf_hash       : bytes[32]
  siblings        : bytes[32][d]
  directions      : bits[d]
  root            : bytes[32]
  root_index      : uint64
  chain_A_tx_id   : bytes[32]
  chain_B_tx_id   : bytes[32]
  tsa_token       : bytes (RFC 3161)
}

Size at d=30 (N=10⁹): ~1,000 bytes + TSA token (~2 KB)
Total: under 4 KB for any practical event count

7.2 Light Client Verification Protocol

1. Obtain event payload P_i from any provider
2. Obtain Proof_i from any proof provider
3. Obtain anchored root R at root_index k from public endpoint, chain query, or root cache
4. Recompute leaf_hash from P_i; verify Merkle path to R; verify R against chain_A_tx_id and chain_B_tx_id; verify TSA token
5. Accept if all checks pass. Report specific failure code if any check fails.

Performance: Leaf + path hashing: <1 ms on any modern device. Chain RPC calls: 50–500 ms (cacheable). Total data required per verification: <10 KB. No full dataset download required.
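
Steps 2 and 4 reduce to a path recomputation; a sketch assuming directions[i] = 0 means the running hash is the left child at level i (the proof structure above does not fix the bit convention):

import hashlib

def verify_inclusion(leaf_hash: bytes, siblings: list[bytes],
                     directions: list[int], root: bytes) -> bool:
    h = leaf_hash
    for sibling, d in zip(siblings, directions):
        # d == 0: running hash is the left child; d == 1: the right child.
        h = hashlib.sha3_256(h + sibling if d == 0 else sibling + h).digest()
    return h == root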

7.3 Example Verification Output

Input:  event_id = "a3f2...", seq_id = 88,421,003
        triadic_outcome = 0, domain = FINANCIAL

Step 1: leaf_hash = SHA3-256(canonical_serialize(P_i))
        = "b7c4e1..."

Step 2: Verify path [30 levels]...
        computed_root = "8e22a1..."

Step 3: Fetch root at root_index=88,213 from public endpoint
        R = "8e22a1..." ✓

Step 4: Bitcoin OP_RETURN contains "8e22a1..." ✓
        Ethereum log contains "8e22a1..." ✓
        TSA token: timestamp = 2025-03-15T14:23:01Z ✓

Result: VERIFIED — Epistemic Hold (0) committed at
        2025-03-15T14:23:01Z under Active Axiom Set V_3

7.4 Oracle Integrity Constraint and Verification Hold

Distinction: Epistemic Hold (0) is a triadic governance decision made by the TL decision engine. Verification Hold is a pre-decision integrity suspension: the inputs required for any triadic determination are of unverifiable provenance. The triadic decision process cannot begin until Verification Hold is resolved.

External input arrives
  │
  ▼
Provenance validation:
  ├─ Signed by registered oracle?
  ├─ Signature verifiable against registry?
  ├─ Timestamp within staleness bound (≤30s)?
  └─ Hash committed in prior TL log entry?
        │
        ├─ ALL PASS → triadic decision { +1 | 0 | -1 }
        │
        └─ ANY FAIL → VERIFICATION HOLD
                        │
                        ├─ Resolution: re-authenticate or replace source
                        └─ Timeout (default 5s) → auto-resolve to Epistemic Hold (0)
                             hold_trigger_src = ORACLE_UNVERIFIABLE
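
The triage above can be sketched with an illustrative input record; the signature check is shown as a precomputed boolean because the oracle registry interface is deployment-specific:

import time
from dataclasses import dataclass

STALENESS_BOUND_S = 30

@dataclass
class OracleInput:            # illustrative field set
    oracle_id: str
    payload_hash: bytes
    signature_valid: bool     # result of the registry signature check
    timestamp: float          # seconds since Unix epoch

def triage(msg: OracleInput, registry: set[str], committed: set[bytes]) -> str:
    checks = (msg.oracle_id in registry,
              msg.signature_valid,
              time.time() - msg.timestamp <= STALENESS_BOUND_S,
              msg.payload_hash in committed)
    return "TRIADIC_DECISION" if all(checks) else "VERIFICATION_HOLD"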

7.5 Crypto-Shredding

When the data-encryption key K_t is destroyed (signed HSM deletion attestation), the pre-hash event data encrypted under it becomes permanently unreadable. Merkle proofs remain fully valid: they authenticate the leaf hash, which was computed over the canonical form before encryption. Post-shredding verification confirms: event occurred, triadic outcome, domain, Pillar reference, Active Axiom Set, timestamp — without reconstructing personal data.

§ 8

Data Availability Strategy

A Merkle root without retrievable data fails TL governance guarantees. The Merkle tree provides tamper evidence and selective verifiability; it does not guarantee that the full event context needed for forensic reconstruction is available. Data availability must be ensured independently.

8.1 Storage Architecture

Primary (hot): Append-only distributed log (Kafka replication factor ≥ 3 or equivalent) co-located with validators.

Secondary (warm): Geographic replicas in ≥ 3 jurisdictions. Replication target: within 5 seconds of primary write.

Tertiary (cold archive): Migration after 90 days to object storage with versioning. Replicated to decentralized storage network (Filecoin or Arweave) with proof-of-storage commitment.

8.2 Retention Horizon

Minimum: 10 years from event creation (EU MiFID II: 5 years; US SEC Rule 17a-4: 6 years; configure to strictest applicable jurisdiction). Configurable maximum for intergenerational review: 75 years. Crypto-shredded personal data: hash records retained indefinitely; pre-hash personal data deleted per retention schedule.

8.3 Disaster Recovery Protocol

Step 1: Detect via integrity check or replication monitoring. Announce on public verification endpoint.
Step 2: Restore from secondary geographic replica; verify against Merkle roots.
Step 3: If secondary affected, restore from decentralized cold archive using content addresses derived from leaf hashes.
Step 4: If tertiary also affected — declare irreversible data loss. Merkle proofs remain structurally valid; payloads are unrecoverable. Permanent availability degradation flag on public endpoint.

§ 9

Log Truncation and Tamper Resistance

9.1 Integrity Check Schedule

Interval     Scope                                               Response on failure
Continuous   WAL CRC-32C on every read                           Immediate anomaly signaling
Hourly       Partial root recomputation over last 3,600 s        Investigation triggered
Daily        Full root recomputation since last anchor cycle     Anomaly signal + operator notification
Monthly      Full history recomputation (dedicated audit node)   Disaster recovery assessment

9.2 Schema Governance

Signed schema registry: Append-only, threshold-signed log. Each entry requires ⌈2n/3⌉ + 1 validator signatures.
Dual control: Schema changes require approval signatures from two independent governance authority members.
Independent anchoring: Each schema_version's schema_hash is anchored on-chain separately from event log anchoring — public, tamper-evident record of all schema versions and effective dates.

§ 10

Latency and Throughput Modeling

10.1 Latency Budget (≤ 2,000 µs Total)

Operation                          Allocation (µs)   Notes
Canonical serialization (proto3)   80                Deterministic mode
SHA3-256 leaf hash                 40                Hardware-accelerated
WAL write (fsync)                  200               NVMe SSD; dominant term
HSM seq_id counter increment       60                TPM 2.0
Merkle rolling buffer insert       20                In-memory O(log n)
Partial subtree hash update        30                Background, amortized
TSA binding (async)                0                 Off critical path
Anchoring (async)                  0                 Off critical path
Total Merkle overhead              430               21.5% of 2,000 µs budget
Remaining for TL logic             1,570

WAL fsync (200 µs) is the dominant term. Battery-backed DRAM or Intel Optane reduces this to 10–20 µs, providing 190 µs additional headroom and enabling throughput well above the baseline 10,000 events/sec target.

10.2 Throughput Analysis — 10,000 Events/Sec Scenario

Configuration: 4 domain-partitioned execution threads, NVMe SSD WAL, SHA3-256 with AVX-512.
Per-event critical path: 430 µs → 2,325 events/sec per thread.
4-thread configuration (2×FINANCIAL, 1×ECONOMIC, 1×CYBER_PHYSICAL): ≈9,300 events/sec aggregate, within 7% of the 10,000 events/sec target; the WAL fsync reduction described in §10.1 closes the gap and restores margin.
Worst-case peak (50,000 events/sec): WAL write volume ≈ 6.8 MB/sec — well within NVMe sequential write capacity. Buffer fills every 20 ms, triggering partial subtree computation in <1 µs.

§ 11

Formal Integrity Guarantees

11.1 Core Cryptographic Properties

Collision resistance: SHA3-256 provides 2^128 classical / 2^64 quantum resistance. Computationally infeasible through 2040 horizon. Version-field migration enables algorithm replacement without rehashing historical events.

Preimage resistance: 2^256 classical / 2^128 quantum. Anchored root does not leak event content.

Second-preimage resistance: 2^256 classical. A committed event cannot be silently substituted with a different event with the same hash.

Forward integrity: Any alteration of event E_i changes H(E_i), propagates through all subsequent prev_event_hash values and subtree roots, invalidating all subsequent anchored Master Roots. Forward integrity holds from WAL commit; anchoring makes it globally verifiable.

11.2 Degradation Conditions

Simultaneous compromise of all anchoring chains: Both Chain A and Chain B must be simultaneously rewritten. Economically infeasible with distinct consensus families. Mitigation: public endpoint root archive, regulator root caches.

HSM compromise: Threshold signing (multiple HSMs), hardware tamper detection, and key rotation on suspected compromise.

Catastrophic hash break: hash_algo_version field enables emergency migration. Post-quantum hybrid mode (0x10) provides secondary attestation layer during transition.

11.3 Post-Quantum Migration Timeline

Full migration to post-quantum hash construction recommended before 2035, aligned with NIST PQC migration timelines. Migration is gradual: new events use the new algorithm; old events retain original algorithm identifiers; verifiers support both. No retroactive rehashing. Timeline governed by dual-control schema update process (§9).

§ 12

Comparative Analysis

Bitcoin transactions
  Tree type: Binary Merkle. Hash: SHA256d. Non-inclusion proofs: no.
  TL relevance: Proof model directly applicable; OP_RETURN anchoring used by TL.

Ethereum state trie
  Tree type: Modified Merkle-Patricia. Hash: Keccak-256. Non-inclusion proofs: yes (MPT).
  TL relevance: State non-inclusion valuable; trie overhead unsuitable for a high-throughput append-only log.

Certificate Transparency
  Tree type: Binary Merkle (append-only). Hash: SHA-256. Non-inclusion proofs: partial.
  TL relevance: Closest architectural analog. TL inherits CT's append-only model, anchored root sequence, inclusion proof protocol, and independent auditor model.

Sparse Merkle Tree
  Tree type: Binary (sparse 2^256). Hash: SHA3-256. Non-inclusion proofs: yes (native).
  TL relevance: Optional extension for non-inclusion proof deployments; the base TL spec uses a dense append-only tree.

12.1 Certificate Transparency Alignment

Google's Certificate Transparency (RFC 9162) is the closest architectural analog to TL's Merkle log. Key structural inheritances: append-only model, anchored root sequence (Signed Tree Heads → Master Roots), inclusion proof protocol, and independent auditor model. Key differences: TL uses multi-chain anchoring vs. CT's gossip-based consistency checking; TL leaves encode triadic governance outcomes; TL hierarchical subtree model provides domain separation absent in CT.

§ 13

Failure Mode Disclosure

Residual Risk Statement: The following failure modes are present even under full specification compliance. All are mitigated but not eliminated. Deployers must assess residual risk against their threat model.

Failure mode: Hash algorithm break
  Condition: Polynomial-time SHA3-256 collision found
  Impact: All Merkle proofs invalid
  Mitigation: hash_algo_version migration; post-quantum hybrid mode

Failure mode: Both anchoring chains fail
  Condition: Simultaneous chain shutdown or 51% attack on both
  Impact: Future roots unanchored; existing roots remain verifiable from archives
  Mitigation: Distinct consensus families; independent root archive

Failure mode: HSM supply chain compromise
  Condition: Hardware-level backdoor in HSM/TPM
  Impact: Key protection and monotonic counter integrity subverted without software detection
  Mitigation: Multi-vendor deployment; open firmware where available; physical security

Failure mode: WAL destruction before anchoring
  Condition: Storage failure during deferral window
  Impact: Events in destroyed range have no cryptographic record
  Mitigation: Redundant encrypted WAL; secondary WAL replication required in production

Failure mode: Byzantine threshold exceeded
  Condition: More than ⌊(n−1)/3⌋ validators simultaneously Byzantine
  Impact: BFT guarantee fails; forged Master Root possible
  Mitigation: Validator legal accountability; independent auditor monitoring; public anchoring for post-hoc detection

Failure mode: Catastrophic data loss (all tiers)
  Condition: All storage tiers simultaneously unavailable
  Impact: Merkle roots remain anchored; event payloads unrecoverable
  Mitigation: Permanent availability-degradation flag; decentralized cold archive; proof-of-storage attestations

Failure mode: Implementation divergence
  Condition: Canonical serialization differs between implementations
  Impact: Proofs fail for correct events
  Mitigation: Published test vectors; mandatory compliance test suite for all implementations

13.1 Cryptographic Dependency Transparency

This specification depends on: SHA3-256 (FIPS 202) — collision, preimage, second-preimage resistance; RFC 3161 TSA — timestamp binding security depends on TSA private key protection; CRYSTALS-Dilithium (NIST PQC, hybrid mode) — post-quantum signature security; HMAC-SHA256 (pseudonymization) — keyed pseudonym security depends on pseudonymization key protection; CRC-32C (WAL) — error detection only, not tamper evidence.

Note on CRC-32C: CRC-32C in the WAL is explicitly not a security control. It detects accidental corruption, not adversarial modification. WAL tamper detection relies on Merkle integrity checks defined in §9.