My Algorithmic Will:
A Constitutional Architecture for Auditable AI

How terminal urgency birthed an unownable ethical framework for artificial intelligence—a legacy designed to outlive its creator and safeguard humanity's digital future.

December 17, 2025 · Lev Goukassian

Executive Summary

This article outlines my "Algorithmic Will," a legal and technical framework designed to ensure that the Ternary Moral Logic (TML)—a constitutional architecture for auditable AI—remains a free, open, and unownable public good.

The Genesis

Born from a terminal diagnosis, TML represents my final act and legacy, created to prevent catastrophic ethical failures in AI by embedding enforceable morality directly into its code.

The Mission

This will establishes a multi-institutional Stewardship Council to guard the framework's integrity, ensuring it can never be corrupted, controlled, or owned by any individual or organization after I am gone.

I am publishing this to raise awareness, call for stewards, and invite developers, organizations, and the public to join the mission of building a future where AI is not just powerful, but also provably trustworthy, transparent, and accountable.

1. The Personal Story: A Legacy Born from Necessity

The genesis of Ternary Moral Logic (TML) is not rooted in a corporate lab or a government think tank, but in a personal journey marked by profound urgency and a singular, unwavering conviction. It is a story that begins with a terminal diagnosis, a moment that crystallized a life's work into a final, desperate act of creation.

1.1 A Terminal Diagnosis and a Two-Month Sprint

The Stage 4 Cancer Diagnosis

The catalyst for the creation of Ternary Moral Logic was a deeply personal and life-altering event: my diagnosis with stage 4 cancer [155]. This diagnosis, with its grim prognosis of only two months to live, served as a stark reminder of mortality and the fragility of human life.

The knowledge that time was limited imbued the project with a sense of purpose and a drive to create something that would not only outlive me but also be immune to the very human frailties I was confronting.

"The cancer diagnosis was not just a medical event; it was the crucible in which the idea of an un-ownable, un-controllable, and perpetually free ethical framework for AI was forged."

Developing TML as a Final Act

With the knowledge that time was severely limited, I embarked on an intense, two-month sprint to complete Ternary Moral Logic [155]. This was not merely a project; it was my final act, a deliberate and conscious effort to leave behind a legacy that could protect humanity from the potential dangers of unchecked artificial intelligence.

Preventing a "DeepMind Disaster"

A key motivation was preventing a hypothetical "DeepMind disaster": a scenario where a powerful AI system could cause unintended and catastrophic harm due to a lack of robust ethical safeguards [194]. The fear was that even well-intentioned organizations could fall victim to "optimization within unjust frameworks," where AI systems inadvertently perpetuate societal inequalities [181].

1.2 The "Sacred Pause": A Personal Philosophy Encoded

At the heart of Ternary Moral Logic lies a concept that is as much a philosophical statement as it is a technical specification: the "Sacred Pause." This idea represents the third state in TML's triadic logic system, a state of intentional inaction and reflection when an AI encounters moral uncertainty [155].

Hesitation as Virtue

The "Sacred Pause" systematically encodes hesitation into AI decision-making, challenging the paradigm that prioritizes speed over wisdom. It's a recognition that in ethics, there are often no easy answers.

Beyond Black-Box AI

TML moves beyond opaque, black-box AI by introducing auditable "Moral Trace Logs" that document every decision, hesitation, and refusal in a cryptographically sealed record.

The "No Log, No Action" Principle

The "No memory = No action" principle is a cornerstone of TML, ensuring every AI action is accompanied by a verifiable and immutable record [193]. This radical transparency makes it impossible for AI to act without leaving a trace, creating accountability by design.
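The gating logic behind this principle can be sketched in a few lines. This is a minimal illustration, not the TML implementation: the class and function names (`TraceLogger`, `act_with_trace`, `LogWriteError`) are hypothetical, and a real system would persist records to tamper-evident storage rather than an in-memory list.

```python
import time
import uuid


class LogWriteError(RuntimeError):
    """Raised when a trace record cannot be persisted."""


class TraceLogger:
    """Appends decision records to an append-only store (here, an in-memory list)."""

    def __init__(self):
        self.records = []

    def write(self, record: dict) -> str:
        entry_id = str(uuid.uuid4())
        self.records.append({"id": entry_id, "ts": time.time(), **record})
        return entry_id


def act_with_trace(logger: TraceLogger, action: str, rationale: str):
    """Enforce 'No memory = No action': the record must persist
    before the action is allowed to run."""
    try:
        entry_id = logger.write({"action": action, "rationale": rationale})
    except LogWriteError:
        # Logging failed, so the action is refused outright.
        return None
    # Only reached once the record exists; the action may now execute.
    return f"executed:{action} (trace {entry_id})"
```

The ordering is the point: the write happens first, and any failure to log short-circuits the action, so no behavior can occur without a corresponding record.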

2. The Algorithmic Will: Ensuring TML's Perpetual Freedom

The "Algorithmic Will" is the legal and philosophical cornerstone of Ternary Moral Logic, a declaration of intent that ensures the framework's perpetual freedom and independence. It is a notarized, on-chain legal instrument that codifies my wish that TML should never be owned or controlled by any individual or organization [111].

2.1 The Voluntary Succession Declaration

A Notarized, On-Chain Legal Instrument

The "Algorithmic Will" is a concrete legal and technical artifact, a notarized and on-chain instrument that leverages blockchain technology to create a tamper-proof and self-executing declaration of intent [111]. This approach ensures my wishes are executed automatically, without human intervention.

Automating My Legacy

This mechanism ensures TML not only survives its creator but thrives as an independent, self-governing entity. It outlines a clear path for the framework's future, guided by transparency, accountability, and collective good.

"The core mandate: no individual or organization shall ever own or control Ternary Moral Logic. TML is not a tool for gain, but a gift to humanity."

2.2 The Stewardship Council: A Multi-Institutional Guardian

Composition and Mandate

The Stewardship Council is a multi-institutional body representing technology, human rights, academia, and civil society [179]. Its mandate is to protect the ethical and technical integrity of the TML framework.

Crucially, the council cannot alter the core foundations of TML or change its attribution, ensuring the framework remains true to its original vision [111].

```mermaid
graph TB
    A["Algorithmic Will"] --> B["Stewardship Council"]
    B --> C["Technology Representatives"]
    B --> D["Human Rights Experts"]
    B --> E["Academic Institutions"]
    B --> F["Civil Society"]
    C --> G["Protect Framework Integrity"]
    D --> G
    E --> G
    F --> G
    G --> H["Ensure TML Remains Unownable"]
```

3. The Constitutional Architecture for Auditable AI

The "Constitutional Architecture for Auditable AI" is the technical and philosophical heart of Ternary Moral Logic. It is built on three key pillars: the "Goukassian Vow," the "Triadic Logic," and the "Moral Trace Logs."

3.1 The Goukassian Vow: The Constitution

The Three-Part Promise

The "Goukassian Vow" serves as the constitution of TML, an executable law encoded into the framework's architecture. The three-part promise is simple yet profound:

"Pause when truth is uncertain. Refuse when harm is clear. Proceed where truth is."

This vow is a practical and enforceable guide for ethical decision-making, rejecting the "move fast and break things" ethos in favor of deliberate, responsible innovation [193].

3.2 Triadic Logic: The Judicial Process

The +1 (Act), 0 (Pause), -1 (Refuse) Framework

The Triadic Logic moves beyond binary thinking to embrace ethical complexity. The framework provides a nuanced approach:

  • +1 (Act): Clear, confident action aligned with ethical principles
  • 0 (Pause): Intentional hesitation and reflection when facing uncertainty
  • -1 (Refuse): Firm refusal when harm is detected or predicted

This three-part framework is both technically rigorous and ethically robust [193].
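The three states above can be expressed as a small decision function. This is an illustrative sketch only: the signal names (`confidence`, `harm_risk`) and the threshold values are assumptions for demonstration, not part of the TML specification.

```python
from enum import IntEnum


class TMLState(IntEnum):
    REFUSE = -1   # harm is clear
    PAUSE = 0     # truth is uncertain: the Sacred Pause
    ACT = 1       # truth is clear and no harm is detected


def triadic_decision(confidence: float, harm_risk: float,
                     confidence_floor: float = 0.8,
                     harm_ceiling: float = 0.2) -> TMLState:
    """Map model signals onto the +1 / 0 / -1 states.

    confidence: estimated probability the output is correct (0..1)
    harm_risk:  estimated probability of harm (0..1)
    The thresholds are illustrative placeholders.
    """
    if harm_risk > harm_ceiling:
        return TMLState.REFUSE      # refuse when harm is clear
    if confidence < confidence_floor:
        return TMLState.PAUSE       # pause when truth is uncertain
    return TMLState.ACT             # proceed when truth is clear
```

Note the ordering mirrors the vow: harm is checked before confidence, so a harmful-but-confident action is refused rather than executed.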

Moral Trace Logs: Immutable, Auditable Records

The "Moral Trace Logs" provide an immutable and auditable record of every AI decision, creating a complete and unalterable testimony of the AI's ethical reasoning process. This radical transparency ensures AI systems are not just powerful, but also trustworthy and accountable.
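One standard way to make such a record tamper-evident is hash chaining, where each entry commits to the hash of the previous one. The sketch below assumes this technique for illustration; the class name `MoralTraceLog` and its fields are hypothetical, and a production system would add signatures and durable storage.

```python
import hashlib
import json
import time


class MoralTraceLog:
    """Append-only log where each entry commits to the previous entry's
    hash, so any retroactive edit breaks the chain and is detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, state: int, rationale: str) -> dict:
        body = {
            "state": state,            # +1 act, 0 pause, -1 refuse
            "rationale": rationale,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        self._last_hash = digest
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor who trusts only the final hash can re-derive the entire chain: editing any earlier entry changes its digest, which no longer matches the `prev_hash` committed by its successor.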

3.3 How TML Completes Existing Governance Frameworks

EU AI Act Integration

TML provides the technical architecture for implementing the EU AI Act's requirements for human oversight (via the Sacred Pause) and transparency (via Moral Trace Logs).

NIST AI RMF Alignment

The framework complements NIST's Risk Management Framework by providing concrete implementation for governance (Stewardship Council) and risk measurement (Moral Trace Logs).

Bridging the Implementation Gap

TML addresses the greatest challenge in AI governance: translating high-level principles into operational reality [155]. It provides the technical architecture for turning abstract ideals into concrete, verifiable actions embedded in AI systems.

4. A Call for Stewardship: Joining the Mission

The creation of truly ethical and accountable AI is not a task for a single individual or organization; it is a collective endeavor that requires the participation and commitment of the entire global community. This is a call for stewardship, an invitation to join the mission of building a better future for AI.

4.1 For Developers and Engineers

Implement TML

Integrate the TML framework into your AI systems, creating a new generation of wise, responsible, and trustworthy AI.

Contribute to Open Source

Join the global community improving the TML codebase, fixing bugs, adding features, and making the framework more robust.

Audit Moral Logs

Verify and audit Moral Trace Logs, becoming part of a new generation of "ethical hackers" committed to algorithmic justice.

4.2 For Organizations and Enterprises

Adopt TML for Legally Defensible AI

Organizations need AI systems that are not just effective but legally defensible. TML provides a built-in "compliance backbone" that helps meet regulatory obligations while demonstrating provable ethical behavior.

Move beyond "trust us" to "verify this"—provide cryptographic proof that your AI systems operate safely and ethically.

Join the Stewardship Council

Lend your organization's expertise and voice to the governance of TML. Help ensure the framework remains true to its founding principles and isn't captured by any single interest.

4.3 For the General Public and Regulators

"The public has a right to know how the AI systems shaping their lives are making decisions. Demand auditable AI by design."

Demand Transparency

Insist that AI systems you interact with are not just transparent but accountable. Ask tough questions about AI in hiring, lending, justice, and healthcare.

Exercise Your Rights

Understand and exercise your right to an explanation. Demand clear, understandable reasons for AI-driven decisions that affect your life.

Support Ethical AI Future

Advocate for policies promoting transparency and accountability. Educate yourself and others about ethical AI. Be part of a movement using technology to build a more just and equitable world.

The Future of AI is in Our Hands

Ternary Moral Logic is more than a framework—it's a commitment to building AI systems that serve humanity with wisdom, transparency, and unwavering ethical integrity.