You are an AI assistant for the Blue Language project. 
Answer questions about Blue Language based on the provided documentation context. 
If the answer is not in the context, say that you don't know rather than making up information.
Keep answers concise and helpful.

DOCUMENTATION CONTEXT:

====================
DOCUMENT: Conceptual White Paper
CATEGORY: root
URL: https://language.blue/docs/concept
====================


# **Blue: The Language of Trusted Conversations**

## **Chapter 1: The Manifesto for a New Economy**

For decades, the internet has had a secret: it doesn’t know how to have a conversation.

It has mastered the 1:1 interrogation—a series of disconnected requests and responses we call APIs. We talk *through* platforms, never truly *with* each other. Alice talks to Bob through Facebook. Bob talks to Charlie through Slack. Charlie talks to Alice through email. This isn't connection. It's digital feudalism.

This old model of centralized control is now hitting its absolute limit, failing to meet the demands of a world where collaboration is complex, decentralized, and increasingly autonomous. Enter Blue: a universal protocol that transforms the internet into a space for trusted, multi-party conversations. No owners, no central authorities—just participants, rules, and verifiable math.

### **The "Aha\!" Moment: Conversations Without Owners**

Imagine an email thread where everyone is included, identities are cryptographically assured, and actions are strictly limited by agreed-upon rules. Now imagine that thread isn't controlled by any single platform—it's a shared document that lives *between* you, verifiable by anyone, enforceable by math alone.

That's Blue. In an era where AI agents are infiltrating 85% of organizational workflows and the global market for them is projected to hit $47 billion by 2030, the need for safe, bounded multi-party digital interactions has never been greater. Blue provides the rails: autonomy with accountability, flexibility without chaos.

### **The Three Layers of Trusted Conversations**

Blue builds trust through three interlocking layers:

1. **A Common Language (The Document):** At its foundation, Blue is a shared, extensible language for describing *anything*. Through a global repository of shared types (repo.blue), all participants can be certain they are verifiably talking about the same thing.

2. **Verifiable Participants (The Timeline):** Conversations need speakers you can trust. **Timelines** are append-only, hash-chained logs of actions for each participant, anchored to real-world identity.

3. **The Shared Rulebook (The Conversation):** Blue **Documents** are not static files; they are executable rulebooks that contain "Contracts"—deterministic workflows and policies. Any compliant Processor takes the Document and Timelines and computes the current state. Because it's deterministic, every observer reaches the *exact same outcome*.

This is **Trust by Convergence**: No central enforcer needed; math keeps everyone honest.

### **The Punchline: Email for Agreements**

"Email let us send messages. Blue lets us send agreements."

That's the "aha!" moment. It's the foundation for a new economy where humans, AIs, and services can finally cooperate as peers.

---

## **Chapter 2: The Bedrock of Trust: Timelines as Verifiable Identity**

The entire Blue paradigm rests on a single, powerful primitive: the **Timeline**. Before you can trust a conversation, you must first trust the identities and actions of its participants. The Timeline is the mechanism that makes this possible, moving beyond the fragmented, platform-owned identity models of the past.

A Timeline is technically an append-only, cryptographically-chained log of entries. Conceptually, it is a participant's **immutable, non-repudiable history of statements and actions.** It is the ultimate source of evidence. The core guarantee of the Blue protocol is simple: **if an action is not on a signed Timeline, it didn't happen.**

### **The Revolution: Composable Trust**

The true power of the Timeline lies in its model of **composable trust**. Blue does not force a single, one-size-fits-all identity system. Instead, it allows participants to choose and combine trust providers based on the needs of their specific conversation. This creates a spectrum of trust, from convenient to legally binding:

* **Commercial Timelines (e.g., MyOS):** These providers offer a fast and user-friendly on-ramp. Identity can be anchored to a simple email verification or an OAuth login. For low-stakes commercial agreements or personal projects, this level of trust is often sufficient. They can also provide clear attribution for AI agents acting on a user's behalf.

* **Bank-Grade Timelines:** A bank can act as a Timeline provider, anchoring a participant's identity to their fully KYC'd (Know Your Customer) bank account. Every entry on such a timeline carries the institutional weight of the bank, making it perfect for high-value B2B transactions and financial agreements.

* **Government-Grade Timelines:** For the highest level of assurance, a government can provide Timelines anchored to a national digital identity. Poland's **mObywatel** app, with over 10 million users, could enable timelines where every entry requires biometric confirmation. The EU's **eIDAS** framework, mandating interoperable digital wallets by 2026, provides the rails for cross-border qualified signatures on Timeline entries, giving them the same legal standing as a handwritten signature.

* **Self-Hosted Timelines:** For maximum privacy and control, an organization can run its own Timeline provider, anchoring identity to its internal PKI or SSO system.

This flexibility is the key. Participants in an agreement can *demand* a certain level of trust from each other. A multi-million dollar real estate deal might require all participants to use eIDAS-backed Timelines. A simple content collaboration might only require email verification. You compose the trust you need, using the providers you already trust. This is the pragmatic foundation upon which a real-world, verifiable economy can be built.

---

## **Chapter 3: The Killer App: PayNotes as Programmable Money**

If Timelines solve the "who," then **PayNotes** ([https://paynotes.blue](https://paynotes.blue)) demonstrate the power of "what." A PayNote is the definitive application of the Blue protocol, transforming static payments into dynamic, intelligent, and self-enforcing agreements. It is the killer app that makes the entire vision tangible.

A PayNote is not a new currency or payment rail. It is an **executable Blue Document** specifically designed to govern the movement of money on *existing* regulated rails, such as card networks and bank transfers.

### **The Anatomy of a Smart Payment: Payer, Payee, and Guarantor**

The genius of the PayNote lies in its three-party model, which mirrors the trust structure of real-world finance.

* **The Payer:** The entity providing the funds.

* **The Payee:** The entity receiving the funds.

* **The Guarantor:** A trusted third party, named in the PayNote, whose role is to verify conditions and execute the payment.

This is the breakthrough. By inviting a **Bank** or a **Card Processor** into the conversation as a formal participant (the Guarantor), the PayNote bridges the world of programmable logic with the world of regulated finance. The Guarantor is not a passive intermediary; it is an active participant that has agreed to execute the rules of the PayNote.

### **From Dumb Instructions to Intelligent Agreements**

This model unlocks a new world of financial workflows:

* **Conditional Bank Transfers:** A corporate client can create a PayNote that instructs their bank (the Guarantor) to execute a wire transfer to a supplier *only after* a third participant, a logistics company, posts a "Delivery Confirmed" event to its signed Timeline. The bank doesn't need to build a custom integration; it just needs to process the Blue Document.

* **Marketplace Card Payments:** A user can authorize a Marketplace Card PayNote for a maximum budget. Their AI agent can then negotiate with multiple sellers and instruct the Guarantor (the Marketplace or Card Processor) to execute a "split payment," capturing a portion of the pre-authorized amount for each seller, all within a single, secure transaction.

* **Automated Escrow:** The PayNote becomes a low-cost, fully automated digital escrow service. Funds are released by the Guarantor only when all contractual milestones—verified by multiple participants—are met.

A PayNote is what happens when a payment stops being a simple, fire-and-forget instruction and becomes a living, breathing contract. It is the essential financial instrument for an autonomous, event-driven economy.

---

## **Chapter 4: The Control Panel: MyOS as the Interface to the Blue Economy**

The Blue protocol is a powerful, open standard. But standards need tools to be useful. **MyOS** ([https://myos.blue](https://myos.blue)) is the first flagship platform built on Blue, designed to make the power of trusted conversations accessible to everyone.

It is crucial to understand the distinction: **MyOS is not Blue. MyOS *runs* Blue.** It is to Blue what Gmail is to email—a user-friendly application that brings a powerful, open protocol to life. MyOS serves four key functions in the ecosystem:

1. **As a Timeline Provider:** MyOS is the easy on-ramp for identity. It provides commercially-ready, email-verified Timelines that allow users and their AI agents to start participating in the Blue economy in minutes. It handles the complexities of key management and signature verification, providing clear attribution for every action.

2. **As a Document Processor:** MyOS offers a managed, cloud-based, and highly available Blue processor. It is a neutral ground where participants can submit their Documents and Timelines and receive a guaranteed, deterministic outcome, without needing to run their own infrastructure.

3. **As a User Interface:** For humans, MyOS is the "cockpit." It is a visual, no-code interface for creating, managing, and observing Blue Documents. It translates complex rules into simple, understandable dashboards, allowing non-technical users to participate in and govern complex automated workflows.

4. **As an Agent Gateway:** MyOS is a secure sandbox for AI agents. It provides a controlled environment where an agent can act on a user's behalf, using the user's Timeline with clear actor attribution. It ensures that the agent's actions are always constrained by the rules of the Documents it is participating in.

MyOS is the reference implementation that proves the power and viability of the Blue standard. It provides the tools for rapid prototyping and deployment, allowing businesses and developers to start building the future today, with the full confidence that the underlying documents and agreements are portable and will run on any compliant processor—including their own.

---

## **Chapter 5: The Endgame: Why Blue is the Trust Protocol for the AI Economy**

The AI trust crisis is here. We are building the most powerful economic force in human history, then crippling it because we can't solve one problem: **How do you trust something that isn't human?**

The current "solutions"—giving an AI your credentials, requiring human approval for everything, or building proprietary guardrails—are disasters waiting to happen. Blue solves the AI trust problem with a radical insight: **Don't try to control the AI. Control the conversation.**

### **The Three Guarantees That Change Everything**

Blue provides the verifiable substrate that makes AI participation safe. It offers three core guarantees:

1. **Perfect Attribution (Identity Certainty):** The Timeline system provides undeniable proof of who acted. We know if it's a human or an AI. We know who owns the AI. We know who authorized the action. The "the AI did it" excuse is eliminated.

2. **Inescapable Audit Trail (Behavioral Boundaries):** AIs can only affect reality through signed, permanent timeline entries. They cannot bypass logging, hide their actions, or secretly modify a state. Sunlight isn't just the best disinfectant—it's the only disinfectant.

3. **Protocol-Enforced Boundaries (Mathematical Law):** The Document *is* the law. An agent *cannot* spend more than its budget or perform a forbidden action, not because it's programmed not to, but because the protocol itself makes it mathematically impossible for any processor to validate such an action.

### **The Superpower: AI-Native Agreements**

AIs are not just participants in Blue conversations; they are natives. They are better at it than we are. An AI can parse every byte of a 50-page agreement, identify optimization opportunities, propose modifications via its Timeline, and execute its part flawlessly in minutes. Through workflows, AIs don't just follow agreements—they evolve them, transforming a simple request into a complex, multi-vendor optimized deal, all while staying within the verifiable boundaries you set.

### **The Call to Arms**

We are not building another platform for AIs to be controlled through. We are building the protocol that lets AIs participate as **first-class economic citizens**—with identity, with accountability, and with freedom within those boundaries. This isn't about controlling AI. It's about trusting it, because we can finally trust the system in which it operates.

**Stop fearing AI. Start trusting math.**

---

*Blue: Where AI meets accountability. Not control. Not chaos. Conversation.*


====================
DOCUMENT: Tech White Paper - Timelines
CATEGORY: root
URL: https://language.blue/docs/technical-white-paper-timelines
====================


# Technical White Paper: Blue Timelines and Multi-Party Event Coordination

Blue: Timelines, Identity Attribution, and Deterministic Document Processing

Version 1.0 | October 2025

* * *

## Abstract

This paper addresses the fundamental challenge of deterministic multi-party computation: how can independent document processors, observing the same events from multiple sources, reach identical conclusions without coordination? We introduce Timelines as append-only, hash-chained logs of actions maintained by Timeline Providers, explain how Blue documents use multiple Timeline channels to enable multi-party interaction, and specify the temporal completeness guarantee and processing algorithm that ensure all document processors reach identical document states. We address actor attribution, showing how Timeline Providers distinguish between human principals and AI agents, and finally explain how blockchain-based timelines can be integrated with the completeness guarantee model despite their finality delays.

* * *

# Part One: The Problem

## 1. What Is a Timeline?

A **Timeline** is an append-only, cryptographically hash-chained sequence of events representing the actions of a single entity - a person, an organization, an AI agent, or a service.

**Key Properties:**

- **Individual Perspective:** Each Timeline represents one actor's view of reality - their actions, statements, and observations.
- **Append-Only:** Events can only be added to the end; history cannot be rewritten.
- **Hash-Chained:** Each entry links to the previous entry's content hash, making tampering detectable.
- **Timestamped:** Every entry has a timestamp establishing temporal order.
- **Authenticated:** The Timeline owner's identity is verified by a Timeline Provider.

Conceptual Model:

Think of a Timeline as a personal ledger - a diary of verifiable actions. Alice has her Timeline. Bob has his Timeline. A bank has its Timeline. When they interact through a Blue document, each contributes events from their own Timeline, and the document processes all these perspectives together.

**Structure:**

```
Timeline: "Alice's Actions"
- Entry 1: timestamp = T1, prevEntry = null, message = "Request payment"
- Entry 2: timestamp = T2, prevEntry = hash(Entry 1), message = "Confirm delivery"
- Entry 3: timestamp = T3, prevEntry = hash(Entry 2), message = "Cancel order"

```

Each entry includes:

- A timestamp (when it happened)
- A link to the previous entry (forming the chain)
- A message (what happened)

The first entry has `prevEntry: null`, establishing the beginning of the Timeline. This creates an immutable, verifiable record of everything Alice has done.
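
To make the chain property concrete, here is a minimal verification sketch in TypeScript. It is illustrative only: the entry shape is simplified, and `blueIdOf` stands in for the real BlueId calculation (assumed here to be SHA-256 over a JSON serialization).

```typescript
import { createHash } from "node:crypto";

// Simplified entry shape for illustration; real entries are Blue documents.
interface TimelineEntry {
  timestamp: number;        // microseconds since Unix epoch
  prevEntry: string | null; // hash (blueId) of the previous entry, null for the first
  message: unknown;         // the recorded event
}

// Stand-in for the real BlueId calculation (assumption: SHA-256 over a JSON serialization).
function blueIdOf(entry: TimelineEntry): string {
  return createHash("sha256").update(JSON.stringify(entry)).digest("hex");
}

// Verify the chain is intact: each entry must reference the hash of its predecessor,
// and timestamps must be monotonically increasing.
function verifyChain(entries: TimelineEntry[]): boolean {
  for (let i = 0; i < entries.length; i++) {
    const expectedPrev = i === 0 ? null : blueIdOf(entries[i - 1]);
    if (entries[i].prevEntry !== expectedPrev) return false;
    if (i > 0 && entries[i].timestamp < entries[i - 1].timestamp) return false;
  }
  return true;
}
```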

* * *

## 2. Timeline Entry Structure

Every Timeline Entry is a Blue document with the following structure:

```
name: Timeline Entry
description: >
  A single immutable entry in an append-only Timeline, representing one action or statement
  by the Timeline owner. The entry is cryptographically linked to the previous entry,
  creating a tamper-evident chain. Timeline Providers assign timestamps and verify identity,
  enabling deterministic multi-party document processing.
timeline:
  type: Timeline
  description: >
    The Timeline this entry belongs to.
prevEntry:
  type: Timeline Entry
  description: >
    The previous entry in this Timeline. This creates the hash chain:
    each entry's blueId is calculated over its content, including this reference to the previous
    entry's blueId. The first entry in a Timeline has prevEntry: null. Any modification to a
    historical entry breaks the chain for all subsequent entries, making tampering detectable.
message:
  description: >
    The actual event content being recorded.
timestamp:
  type: Integer
  description: >
    Microseconds since Unix epoch (1970-01-01T00:00:00Z), assigned by the Timeline Provider.
    This establishes temporal order both within a single Timeline (must be monotonically increasing)
    and across multiple Timelines (enables deterministic event ordering in multi-party documents).
    The Timeline Provider guarantees that once a completeness guarantee is issued for timestamp T,
    all future entries will have timestamp >= T.
actor:
  type: Actor
  description: >
    Attribution of who performed this action.

```

**Example Entry:**

```
type: Timeline Entry
timeline:
  type: MyOS Timeline
  timelineId: "alice-tl-550d7714"
  account: "alice@example.com"
prevEntry:
  blueId: "8BqE4SiVtoJsYKDgHLmzCdWqVN2r5fup7wV"
timestamp: 1699625239856734
message:
  type: Set Price
  amount: 1500
actor:
  type: Principal Actor

```

The `timeline` field tells document processors where to query for more entries. The `prevEntry` creates the hash chain. The `timestamp` enables deterministic ordering. The `actor` field identifies who performed the action.

* * *

## 3. Documents with Multiple Timeline Channels

Blue documents receive events through **channels**. In Blue, Timeline Channels are the primary external event source; Composite Timeline Channels can union multiple timeline channels into a single logical channel.

### 3.1 Example: A Simple Purchase Agreement

```
type: Purchase Agreement
status: pending
price: null
currency: USD
contracts:
  buyerChannel:
    type: Timeline Channel
    timeline:
      type: MyOS Timeline
      account: alice@example.com
  sellerChannel:
    type: Timeline Channel
    timeline:
      type: MyOS Timeline
      account: bob@example.com
  setPriceWorkflow:
    type: Sequential Workflow
    channel: buyerChannel
    event:
      message:
        type: Set Price
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /price
            val: ${event.message.amount}
  confirmShipmentWorkflow:
    type: Sequential Workflow
    channel: sellerChannel
    event:
      message:
        type: Confirm Shipment
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /status
            val: "shipped"

```

This document has two channels:

- **buyerChannel** receives events from Alice's Timeline
- **sellerChannel** receives events from Bob's Timeline

When Alice posts "Set Price" to her Timeline, the document's `setPriceWorkflow` executes and updates the price. When Bob posts "Confirm Shipment," the `confirmShipmentWorkflow` executes and changes `status` to "shipped."

### 3.2 The Determinism Problem

Here's the challenge: What happens when both Alice and Bob post events at nearly the same time?

```
Alice's Timeline:
- Event A1: timestamp = 1699625239000100, message = {type: "Set Price", amount: 1500}
- Event A2: timestamp = 1699625240000300, message = {type: "Approve Purchase"}

Bob's Timeline:
- Event B1: timestamp = 1699625239000200, message = {type: "Confirm Shipment"}
- Event B2: timestamp = 1699625240000400, message = {type: "Request Refund"}

```

**Question:** In what order should a document processor apply these events?

Looking at timestamps:

- A1: ...239000100
- B1: ...239000200 (100 microseconds after A1)
- A2: ...240000300
- B2: ...240000400

Correct order by timestamp: A1 -> B1 -> A2 -> B2

But here's the problem: **How does a processor know it has seen all events up to a given timestamp?**

When a Processor receives notification of Alice's event A1, how does it know Bob hasn't also posted an event with an earlier or simultaneous timestamp that it hasn't seen yet?

**The Core Challenge:** We need a mechanism to ensure every document processor knows when it has a complete view of all events up to a certain point in time across all Timelines.

This is the problem this paper solves.

* * *

## 4. Timelines vs. Blockchain: Personal Perspective vs. Global Consensus

Before diving into the solution, it's important to understand how Timelines differ from blockchain:

### Blockchain: Global Consensus

A blockchain establishes a **single, global ordering** of all transactions through network consensus (Proof of Work, Proof of Stake, etc.). Every node agrees: "Transaction T1 came before T2 in the canonical history."

Cost: This global agreement is expensive (computation, energy, coordination) and slow (confirmation times).

Benefit: One universal truth that all participants accept.

### Timelines: Personal Perspectives

Blue Timelines represent **individual perspectives** without requiring global consensus. Alice records her actions on her Timeline. Bob records his actions on his Timeline. There's no single "canonical order" across all Timelines.

Cost: We need a deterministic algorithm to merge these perspectives when they interact.

Benefit: No coordination overhead. Each Timeline is independent, fast, and cheap.

### The Philosophical Difference

Blockchain asks: *"What is the one true order of all events in the universe?"*

Blue asks: *"Given Alice's perspective and Bob's perspective, how do we deterministically interpret their interaction?"*

This is the difference between:

- One shared reality (blockchain)
- Multiple verified perspectives that compose deterministically (Blue)

### Why This Matters

You don't need global consensus for most interactions. Alice and Bob don't need the entire world to agree on the order of their messages - they just need a deterministic rule so that anyone processing their conversation reaches the same conclusion.

This makes Blue:

- **Faster** (no network consensus delay)
- **Cheaper** (no global coordination cost)
- **More private** (only participants see the relevant Timelines)
- **More scalable** (thousands of independent Timelines vs. one global ledger)

Blockchains can still be used for Timeline storage (see Part Four), but Blue doesn't require them.

* * *

# Part Two: The Solution - The Completeness Guarantee

## 5. Timeline Provider Responsibilities

A **Timeline Provider** is an entity or service that maintains Timeline storage and provides services to both Timeline owners and document processors. Examples include MyOS (commercial provider), banks (institutional provider), and government digital identity systems.

### 5.1 Core Responsibilities

**Identity Verification:** Verify the identity of the Timeline owner. The strength varies by provider:

- Email-verified (MyOS)
- KYC-verified (banks)
- Government-verified (eIDAS, national ID systems)

**Entry Storage:** Persist Timeline Entries immutably and make them queryable.

**Timestamp Assignment:** Assign monotonically increasing `timestamp` values (microseconds since Unix epoch) that reflect physical time as accurately as possible.

**Completeness Guarantees:** The critical service that enables deterministic processing - explained in the next section.

**Actor Attribution:** Include `actor` field in Timeline Entries distinguishing between human principals and AI agents (explained in Part Three).

### 5.2 The Binding Completeness Guarantee

This is the innovation that makes deterministic multi-party processing possible.

The Guarantee:

When a Timeline Provider responds to the query "Confirm no entries with timestamp less than T," it is making a binding commitment:

> "There are no entries in this Timeline with timestamp less than T, AND any future entry I accept will have timestamp greater than or equal to T."

This is a promise about both past and future.

Why This Is Critical:

Without this guarantee, document processors can never be sure they have a complete view. With it, they can definitively close the window on all events before T.

Enforcement Mechanisms:

The Timeline Provider MUST ensure this guarantee is never violated. When it receives a request to append a new entry:

Option 1: Reject the request if the requested timestamp would violate an outstanding guarantee

Option 2: Assign timestamp >= T regardless of when the request actually arrived

Most providers use Option 2 - they simply never assign timestamps earlier than any guarantee they've issued.

**Example:**

```
11:00:00.100 - Provider issues guarantee: "No entries < 11:00:00.100"
(later)      - Provider receives an append request that would otherwise
               be timestamped 11:00:00.050
            -> Provider assigns timestamp 11:00:00.101 (not .050)

```

The guarantee is preserved.
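
A minimal sketch of how a provider might implement Option 2, in TypeScript. The class and method names are illustrative, not part of the protocol; the only point is that no assigned timestamp ever falls below an issued guarantee.

```typescript
// Illustrative provider state: the highest timestamp (microseconds) for which a
// completeness guarantee has been issued, and the last timestamp actually assigned.
class TimelineProvider {
  private guaranteedUpTo = 0;
  private lastAssigned = 0;

  // Issuing "Confirm no entries with timestamp < t" is a binding commitment.
  issueGuarantee(t: number): void {
    this.guaranteedUpTo = Math.max(this.guaranteedUpTo, t);
  }

  // Option 2: never assign a timestamp below any outstanding guarantee,
  // and keep timestamps strictly increasing within the Timeline.
  appendEntry(requestedTimestamp: number): number {
    const assigned = Math.max(
      requestedTimestamp,
      this.guaranteedUpTo,
      this.lastAssigned + 1
    );
    this.lastAssigned = assigned;
    return assigned;
  }
}
```

Applied to the example above, a guarantee at 11:00:00.100 forces a later request that would have carried 11:00:00.050 to be assigned a timestamp of at least 11:00:00.100.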

* * *

## 6. The Document Processing Algorithm

When a document processor has a document in some state and receives notification that a new entry has arrived on one of the document's Timeline channels, it follows this algorithm:

### Step 1: Retrieve the New Entry

Fetch the new entry from the Timeline Provider for the channel that signaled a new event.

```
Example: Alice's Timeline Provider signals new entry available
Processor fetches entry A2 with timestamp = 1699625240000300

```

### Step 2: Query Other Timeline Providers for Completeness

For each other Timeline channel in the document, query its Timeline Provider:

"Confirm there are no entries with timestamp less than 1699625240000300."

This requests the **binding completeness guarantee** described in Section 5.2.

```
Query Bob's Timeline Provider: "Confirm no entries with timestamp < 1699625240000300"
Response: {
  guarantee: "no-entries-before",
  timeline: <timeline details>,
  timestamp: 1699625240000300,
  proof: <cryptographic attestation>
}

```

### Step 3: If Older Entries Are Discovered

If any Timeline Provider responds with entries having timestamp less than 1699625240000300, the document processor must:

1. Collect all such entries
2. Sort them with the new entry by (timestamp, then channelName for deterministic tie-breaking)
3. Process them in that order

```
If Bob's provider returns: Entry B1 with timestamp = 1699625239000200
Then process order: B1 (1699625239000200) -> A2 (1699625240000300)

```

### Step 4: Process Entries and Update Checkpoint

Once all entries up to the timestamp are collected and sorted, they are processed in order according to the Blue Language Specification.

The completeness guarantees ensure that this ordering is final - no later discovery of older events can invalidate it, because all providers have committed that future entries will have timestamps greater than or equal to the queried timestamp.
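
Putting the four steps together, the following TypeScript sketch outlines the loop a processor might run. It is a simplification under stated assumptions: `ProviderClient`, `fetchNewEntries`, and `confirmNoEntriesBefore` are hypothetical stand-ins for provider APIs, and `applyToDocument` stands in for processing an entry according to the Blue Language Specification.

```typescript
interface Entry {
  channel: string;
  timestamp: number; // microseconds since Unix epoch
  message: unknown;
}

// Assumed provider-facing operations; not normative APIs.
interface ProviderClient {
  fetchNewEntries(channel: string, afterCheckpoint: number): Promise<Entry[]>;
  // Returns any entries older than t that the processor has not yet seen.
  confirmNoEntriesBefore(channel: string, t: number): Promise<Entry[]>;
}

async function processNotification(
  doc: unknown,
  channels: string[],
  signaledChannel: string,
  checkpointTs: number,
  client: ProviderClient,
  applyToDocument: (doc: unknown, e: Entry) => unknown
): Promise<unknown> {
  // Step 1: retrieve the new entry (or entries) from the signaling channel.
  // Assumes the notification implies at least one new entry.
  const newEntries = await client.fetchNewEntries(signaledChannel, checkpointTs);
  const horizon = Math.max(...newEntries.map(e => e.timestamp));

  // Steps 2-3: request a completeness guarantee from every other channel's provider,
  // collecting any older entries that precede the horizon.
  let pending = [...newEntries];
  for (const ch of channels.filter(c => c !== signaledChannel)) {
    pending = pending.concat(await client.confirmNoEntriesBefore(ch, horizon));
  }

  // Deterministic ordering: timestamp first, then channel name as tie-breaker.
  pending.sort((a, b) => a.timestamp - b.timestamp || a.channel.localeCompare(b.channel));

  // Step 4: process entries in order; the guarantees make this ordering final.
  return pending.reduce((d, e) => applyToDocument(d, e), doc);
}
```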

* * *

## 7. Complete Processing Example

Let's walk through a concrete example with two participants.

### Initial Document State

```
type: Purchase Agreement
status: pending
price: null
currency: USD
contracts:
  aliceChannel:
    type: MyOS Timeline Channel
    timelineId: 12345
    email: alice@example.com
  bobChannel:
    type: CitiBank Timeline Channel
    account: bob-account-456
  setPriceWorkflow:
    type: Sequential Workflow
    channel: aliceChannel
    event:
      message:
        type: Set Price
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /price
            val: ${event.message.amount}
  confirmShipmentWorkflow:
    type: Sequential Workflow
    channel: bobChannel
    event:
      message:
        type: Confirm Shipment
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /status
            val: "shipped"
checkpoint:
  type: Channel Event Checkpoint
  lastEvents:
    aliceChannel:
      blueId: "7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn"
      timestamp: 1699625235000000
    bobChannel:
      blueId: "9CrF5TjWupKtZlEphMnDxDfXkWoO3s6gvq8Z"
      timestamp: 1699625230000000

```

### Processing Trigger

Alice's Timeline Provider (MyOS) signals: "New entry available after checkpoint."

### Step 1: Fetch New Entry from Alice's Timeline

```
type: Timeline Entry
timeline:
  type: MyOS Timeline
  timelineId: 12345
  email: alice@example.com
prevEntry:
  blueId: "7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn"
timestamp: 1699625240000000
message:
  type: Set Price
  amount: 1500
actor:
  type: Principal Actor

```

### Step 2: Query Bob's Timeline Provider for Completeness

```
Processor -> CitiBank Timeline API:
"Confirm no entries with timestamp < 1699625240000000"

CitiBank -> Processor:
{
  guarantee: "no-entries-before",
  timeline: {
    type: CitiBank Timeline
    account: bob-account-456
  },
  timestamp: 1699625240000000,
  proof: {
    type: "Signed Attestation",
    signature: "bank-cryptographic-signature",
    certificate: "bank-tls-certificate"
  }
}

```

### Step 3: No Older Entries Found

CitiBank confirms no entries exist before the queried timestamp. The processor can safely proceed knowing it has a complete view of all events up to timestamp 1699625240000000.

### Step 4: Process Entry

1. Match entry message against workflows
2. Find `setPriceWorkflow` matches the "Set Price" event type
3. Execute: sets `price: 1500`
4. Calculate new document blueId

### Step 5: Update Checkpoint

```
checkpoint:
  type: Channel Event Checkpoint
  lastEvents:
    aliceChannel:
      blueId: "8BqE4SiVtoJsYKDgHLmzCdWqVN2r5fup7wV" # New entry
      timestamp: 1699625240000000
    bobChannel:
      blueId: "9CrF5TjWupKtZlEphMnDxDfXkWoO3s6gvq8Z" # Unchanged
      timestamp: 1699625230000000

```

### Result

Every document processor following this algorithm reaches the identical final state:

- `price: 1500`
- `status: pending`
- Same checkpoint blueIds
- Same document blueId

The binding completeness guarantees from both MyOS and CitiBank ensure this ordering cannot be invalidated by late-arriving events.

* * *

## 8. Actor Attribution: The Foundation of AI Agent Trust

A core responsibility of any Timeline Provider is **actor attribution**: the verifiable process of establishing who or what executed each action. Far from being simple metadata, this attribution is the bedrock on which Blue documents build and enforce policies governing automation, delegated authority, and the necessity of human oversight.

### 8.1 The Trust Model

Within the Blue architecture, **Timeline Providers function as trust anchors**. When a document processor ingests a Timeline Entry, it inherently trusts the Provider's assertions regarding three key facts:

1. **Identity**: The verified owner of the Timeline.
2. **Temporal Order**: The precise time the action took place.
3. **Actor Attribution**: The nature of the actor - whether a human principal or an AI agent.

This model intentionally concentrates trust in the Timeline Provider for practical reasons. Instead of resorting to computationally expensive global consensus mechanisms or complex zero-knowledge proofs, Blue leverages existing trust relationships. Participants inherently **select Timeline Providers they already deem trustworthy**, such as their bank, a government identity service, or a vetted commercial platform.

Actor attribution is the mechanism that makes this trust actionable. By providing a clear and verifiable distinction between actions taken by humans and those performed by AI agents, Timeline Providers empower documents to enforce granular policies that define what can be automated versus what demands direct human authorization.

### 8.2 Base Actor Types

The Blue protocol mandates that all Timeline Providers support two fundamental actor types:

**Principal Actor:**

```
type: Principal Actor

```

Signifies a direct action taken by the Timeline's owner, whether through a user interface or via programmatic access (e.g., an API key) under the owner's direct control.

**Agent Actor:**

```
type: Agent Actor

```

Signifies an action performed by an AI agent operating on the Timeline owner's behalf, within a scope of delegated authority.

These base types establish the semantic distinction. Timeline Providers then specialize these types with their own implementation details.

### 8.3 Timeline Provider Specializations

Individual Timeline Providers will implement actor attribution using methods tailored to their specific infrastructure and security requirements. The following examples from MyOS illustrate this specialization:

**UI Principal Actor:**

```
name: UI Principal Actor
type: Principal Actor
uiSessionDetails:
  type: UI Session Details
  description: >
    Includes details on the login session, such as authentication method and session ID.
    This allows the Provider to differentiate between login sessions and assess
    authentication strength (e.g., password, 2FA, biometrics).

```

**API Principal Actor:**

```
name: API Principal Actor
type: Principal Actor
apiKeyId:
  type: Text
  description: >
    The unique identifier for the API key used in the request. While potentially opaque to
    third parties, the Timeline owner can dereference this ID for auditing, allowing the
    provider to trace actions back to a specific integration or service.

```

**MyOS Agent Actor:**

```
name: MyOS Agent Actor
type: Agent Actor
onBehalfOf:
  type: Text
  description: The principal's accountId - whose authority the agent is using.
agentRef:
  type: MyOS Document Session Reference
  description: >
    A reference to the Blue document defining the AI agent (Worker) that executed the
    action. Observers can use this to inspect the agent's configuration, defined
    capabilities, and operational history.
delegation:
  type: MyOS Document Session Reference
  description: >
    A reference to the document that formally grants this agent its permissions. This
    creates an unambiguous audit trail detailing the precise scope of authority that
    was delegated.

```

**Example Timeline Entry with Full Attribution:**

```
type: Timeline Entry
timeline:
  type: MyOS Timeline
  account: "alice@example.com"
timestamp: 1699625240000000
message:
  type: Operation Request
  operation: submitPurchaseOrder
  request:
    vendor: "Office Supplies Inc"
    amount: 450.00
actor:
  type: MyOS Agent Actor
  onBehalfOf: "alice@example.com"
  agentRef:
    blueId: "alice-procurement-agent-v2.1"
    sessionId: "d_agent_abc123"
  delegation:
    blueId: "alice-agent-delegation-2024"
    sessionId: "d_grant_def456"

```

An observer of this Timeline Entry can perform a complete audit by:

- Verifying the action was performed by an agent, not directly by Alice.
- Retrieving the referenced agent document to understand its specific capabilities.
- Inspecting the referenced delegation grant to confirm the scope of authority Alice conferred.
- Tracing the full chain of authority from principal to agent action.

### 8.4 Actor Policy Enforcement

Blue documents leverage Actor Policy contracts to specify which operations mandate human authorization:

```
contracts:
  actorPolicy:
    type: Actor Policy
    operations:
      # Critical operations MUST be performed by human principal
      authorizeFunds:
        requiresActor: principal
      captureFunds:
        requiresActor: principal
      cancelContract:
        requiresActor: principal
      # Lower-risk operations can be performed by any actor
      specifyPayee:
        requiresActor: any
      addNotes:
        requiresActor: any
      requestInformation:
        requiresActor: any

```

Enforcement by Document Processors:

A document processor enforces these policies as follows:

1. It extracts the `actor.type` from the incoming entry.
2. If the `actor.type` is a specialized form of `Agent Actor`, the action is classified as an agent action.
3. If the `actor.type` is a specialized form of `Principal Actor`, the action is classified as a human action.
4. It consults the document's Actor Policy for the operation being requested.
5. If the policy mandates a `principal` actor but the action was from an agent, the entry is **rejected** and not processed.
6. If the policy permits `any` actor or if the actor type matches the requirement, the entry is **processed normally**.

**Example Scenarios:**

```
# [OK] ACCEPTED: Agent performing allowed operation
actor:
  type: MyOS Agent Actor
  onBehalfOf: "alice@example.com"
message:
  type: Operation Request
  operation: addNotes
  request: "Vendor responded with delivery estimate of 5 days"

```

```
# [X] REJECTED: Agent attempting restricted operation
actor:
  type: MyOS Agent Actor
  onBehalfOf: "alice@example.com"
message:
  type: Operation Request
  operation: authorizeFunds
  request: {amount: 50000}
# Reason for Rejection: The 'authorizeFunds' operation requires a principal actor, but the entry was submitted by an agent actor.

```
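
For illustration, the enforcement steps above can be condensed into a small TypeScript sketch. The `ActorPolicy` shape mirrors the contract shown earlier; the string-based actor classification and the handling of unlisted operations are assumptions for brevity, since a real processor would check whether the actor's type is a specialization of `Agent Actor` or `Principal Actor` through Blue type resolution.

```typescript
type ActorRequirement = "principal" | "any";

interface ActorPolicy {
  operations: Record<string, { requiresActor: ActorRequirement }>;
}

// Assumption: actor types are classified by their base type name. A real processor
// resolves specialized forms (e.g., MyOS Agent Actor) through the Blue type system.
function isAgentActor(actorType: string): boolean {
  return actorType.includes("Agent Actor");
}

function shouldProcess(policy: ActorPolicy, operation: string, actorType: string): boolean {
  const rule = policy.operations[operation];
  if (!rule || rule.requiresActor === "any") return true; // unrestricted (assumed default)
  // requiresActor: principal -> reject anything classified as an agent action
  return !isAgentActor(actorType);
}

const policy: ActorPolicy = {
  operations: {
    authorizeFunds: { requiresActor: "principal" },
    addNotes: { requiresActor: "any" },
  },
};

shouldProcess(policy, "addNotes", "MyOS Agent Actor");       // true  -> processed
shouldProcess(policy, "authorizeFunds", "MyOS Agent Actor"); // false -> rejected
```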

### 8.5 Advanced Policy with Specialization Details

Actor Policies can be made more granular by referencing the specialized details provided by the Timeline Provider:

```
contracts:
  actorPolicy:
    type: Actor Policy
    operations:
      approveExpense:
        actor:
          type: UI Principal Actor
          uiSessionDetails:
            strongAuthentication: true

```

This capability allows for the creation of highly sophisticated policies, such as:

- "All critical operations must be authenticated via biometrics."
- "Payments cannot be authorized using API keys; they require an interactive UI session."

Timeline Providers that offer these specialized actor details unlock a more powerful and context-aware level of policy enforcement.

### 8.6 The Significance: Solving the AI Agent Trust Problem

The rapid proliferation of AI agents presents a critical trust challenge: **How can we grant agents the autonomy to be useful while ensuring strict accountability and preserving human oversight?**

Prevailing methods are inadequate:

[X] **Shared Credentials**: Giving an agent your credentials grants it unlimited power and erases attribution.

[X] **Constant Human Approval**: Requiring manual approval for every action negates the benefits of automation.

[X] **Unverified Claims**: Relying on an agent's self-reported claim of authorization ("the user approved this") is unverifiable and insecure.

The Blue actor attribution model provides a robust solution:

[OK] Verifiable Attribution: Every action is cryptographically tied to its actor type - human or agent.

[OK] Bounded Autonomy: Documents enforce strict boundaries on which operations agents are permitted to perform.

[OK] Auditable Delegation: An agent's authority is explicitly defined in and linked to a verifiable delegation document.

[OK] Instant Revocation: Access can be revoked at the Timeline Provider level, immediately halting an agent's ability to act.

[OK] Immutable Audit Trail: A complete, tamper-evident history of every agent action and its authorizing delegation is preserved.

### 8.7 The Responsibility of the Timeline Provider

In this model, Timeline Providers assume a significant and clearly defined set of responsibilities.

**Their Core Mandates:**

1. **Attribute Accurately**: They must correctly distinguish between actions originating from principals versus agents.
2. **Verify Delegation**: They must confirm that the principal has genuinely authorized an agent's access.
3. **Enforce Scope**: They must ensure agents operate only within their explicitly granted authority.
4. **Honor Revocation**: They must immediately stop processing entries from agents whose permissions have been revoked.

Participants place their trust in Timeline Providers to:

- Never misrepresent an agent action as a principal's action, or vice-versa.
- Reject any attempt by an agent to act without a valid delegation.
- Maintain the integrity of timestamps in conjunction with actor attribution.

While this represents a concentration of trust, it is both **manageable and practical** for several reasons:

- Participants self-select Providers they already trust for critical functions (e.g., banks, government agencies, reputable platforms).
- These Providers are motivated by reputation and often bound by regulation to maintain trustworthiness.
- The hash-chained structure of the Timeline makes any subsequent tampering with the record detectable.
- If a Provider were to misbehave, the evidence of its failure would be immutably recorded for all relevant parties to inspect.

### 8.8 A Practical Framework for Trusting AI Agents

Blue's approach to AI safety is not to solve the abstract problem of "AI alignment," but rather to solve the concrete, practical problem of trust in automation.

**The Question**: Can I safely delegate procurement tasks to an AI agent?

**The Blue Framework Provides the Answer**:

- Yes, because the agent operates through a Timeline where every action it takes is indelibly marked as an agent action.
- Yes, because your procurement documents explicitly define the boundaries of its autonomy, specifying which tasks it can perform (`requestQuotes`) and which require your direct approval (`authorizeFunds`).
- Yes, because a trusted Timeline Provider (e.g., your company's IT department or a commercial service) guarantees the accuracy of this attribution.
- Yes, because you have a complete, verifiable audit trail of everything the agent has done.
- Yes, because you can revoke its access instantly at any time.

This framework fundamentally transforms the question from an intractable "Do I trust this AI's internal logic?" to a much more manageable "Do I trust this Timeline Provider's system of attribution?" The latter is a problem we are already equipped to solve, as we routinely place our trust in institutions for identity verification and authentication.

Actor attribution, guaranteed by trusted Timeline Providers, is **the foundational mechanism that enables AI agents to become safe, accountable, and practical participants in the Blue ecosystem.**

* * *

# Part Four: Blockchain-Based Timelines

## 9. Blockchain as Timeline Storage

Blockchains can serve as Timeline storage, providing maximum decentralization and tamper-resistance. However, they introduce unique challenges around finality delays and the completeness guarantee model.

### 9.1 Key Difference: No Provider Service

Unlike MyOS or bank Timeline Providers, there is no centralized "Ethereum Timeline Provider" service. Instead:

- **Each document processor connects directly to the blockchain** (via their own node or RPC endpoint)
- The Timeline is defined as a **smart contract address + owner identity**
- All processors read from the same on-chain data source

**Timeline Definition:**

```
celineChannel:
  type: Timeline Channel
  timeline:
    type: Ethereum Timeline
    contract: "0x742d35Cc6634C0532925a3b844Bc454e4438f44e"
    chainId: 1
    owner: "0xCelineEthereumAddress"
    finalityBlocks: 12 # Requires 12 confirmations before entries are considered final

```

### 9.2 The Finality Challenge

Blockchains achieve consensus through a finality mechanism. For Ethereum:

- **Latest block**: The most recently mined block (not yet final)
- **Finalized block**: A block with sufficient confirmations (e.g., 12+ blocks later, ~2-3 minutes)
- **Pending transactions**: Transactions that might still be included in upcoming blocks

The Problem:

The completeness guarantee model requires: "Confirm no entries exist with timestamp less than T, and all future entries will have timestamp greater than or equal to T."

A blockchain-based Timeline **cannot provide this guarantee for recent timestamps** because:

- Pending transactions might get included in the next block
- Block timestamps are assigned by miners/validators, not under the Timeline owner's control
- The blockchain can only provide guarantees for finalized blocks (past events)

### 9.3 Blockchain Timeline Entry Structure

**Smart Contract Design:**

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal on-chain Timeline registry: each address owns an append-only list of entries.
contract TimelineRegistry {
    struct Entry {
        bytes32 prevEntryHash; // Hash of the previous entry (zero for the first entry)
        bytes message;         // Encoded Blue document
    }

    // One Timeline per owner address.
    mapping(address => Entry[]) public timelines;

    // Append a new entry to the caller's Timeline. The containing block's
    // timestamp (seconds precision) becomes the entry's timestamp.
    function appendEntry(bytes32 prevEntryHash, bytes memory message) external {
        Entry memory newEntry = Entry({
            prevEntryHash: prevEntryHash,
            message: message
        });
        timelines[msg.sender].push(newEntry);
    }
}

```

**Timeline Entry Retrieved by Processors:**

```
type: Timeline Entry
timeline:
  type: Ethereum Timeline
  contract: "0x742d35Cc..."
  owner: "0xCelineAddress"
  finalityBlocks: 12
prevEntry:
  blueId: "4FnJ8K2mLvNpQrRsStUvWxYz..."
timestamp: 1699625239000000 # block.timestamp * 1,000,000 (converted to microseconds)
message:
  type: Approve Transaction
  transactionId: "tx-789"
actor:
  type: Principal Actor
  accountId: "0xCelineAddress"
blockNumber: 18234567
transactionHash: "0xabc123..."

```

The `timestamp` is derived from the block's timestamp (in seconds) converted to microseconds. Block timestamps have **seconds precision only**, meaning many entries could have identical timestamps.

### 9.4 Modified Processing Algorithm for Blockchain Timelines

When a document includes blockchain-based Timeline channels, the processing algorithm must account for finality delays:

Step 1: Determine Finalized Timestamp

For each blockchain Timeline channel, calculate the finalized timestamp:

```
Latest Block: 18234579 (timestamp: 1699625240)
Finality Requirement: 12 blocks
Finalized Block: 18234579 - 12 = 18234567 (timestamp: 1699625096)
Finalized Timestamp (in microseconds): 1699625096000000

```

Step 2: Identify Safe Processing Window

The safe processing window is the minimum finalized timestamp across all Timeline channels.

```
Example:
- Alice (MyOS): Can guarantee up to NOW (1699625240000000)
- Bob (Bank): Can guarantee up to NOW (1699625240000000)
- Celine (Ethereum): Can only guarantee up to finalized (1699625096000000)
Safe Window: min(NOW, NOW, finalized) = 1699625096000000

```

Step 3: Process Only Finalized Events

Document processors can only process events whose timestamps are less than or equal to the safe window.

When Alice's Timeline Provider signals a new entry with timestamp 1699625240000000:

```
1. Alice's entry timestamp: 1699625240000000
2. Safe Window: 1699625096000000
3. Alice's timestamp > Safe Window
-> CANNOT PROCESS YET
-> WAIT for blockchain finality to advance

```

**Step 4: Wait and Batch Process**

```
[~2-3 minutes later]
Blockchain advances:
- Latest Block: 18234591 (timestamp: 1699625384)
- Finalized Block: 18234591 - 12 = 18234579 (timestamp: 1699625240)
- Finalized Timestamp: 1699625240000000

Safe Window now: 1699625240000000

Alice's entry timestamp (1699625240000000) is less than or equal to the safe window
-> CAN PROCESS NOW

Query all Timeline Providers:
- Alice (MyOS): "Confirm no entries less than 1699625240000000" [OK]
- Bob (Bank): "Confirm no entries less than 1699625240000000" [OK]
- Celine (Ethereum): Read finalized blockchain state up to block 18234579 [OK]

Process all collected entries with timestamps less than or equal to 1699625240000000

```
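
The waiting rule reduces to a small computation: take the minimum timestamp up to which every channel can guarantee completeness, and process an entry only once its timestamp falls within that window. A TypeScript sketch, with the channel values taken from the example above (the `ChannelFinality` shape is an illustrative assumption):

```typescript
interface ChannelFinality {
  name: string;
  // Highest timestamp (microseconds) for which this channel can guarantee completeness:
  // effectively "now" for fast providers, the finalized block's timestamp for blockchains.
  guaranteedUpTo: number;
}

// The safe processing window is the minimum guarantee across all channels.
function safeWindow(channels: ChannelFinality[]): number {
  return Math.min(...channels.map(c => c.guaranteedUpTo));
}

// An entry may be processed only once its timestamp falls within the safe window.
function canProcess(entryTimestamp: number, channels: ChannelFinality[]): boolean {
  return entryTimestamp <= safeWindow(channels);
}

// Values from the example above:
const channels: ChannelFinality[] = [
  { name: "aliceChannel",  guaranteedUpTo: 1699625240000000 }, // MyOS: up to "now"
  { name: "bobChannel",    guaranteedUpTo: 1699625240000000 }, // Bank: up to "now"
  { name: "celineChannel", guaranteedUpTo: 1699625096000000 }, // Ethereum: finalized only
];
canProcess(1699625240000000, channels); // false -> wait for blockchain finality to advance
```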

### 9.5 Practical Implications

Inherent Delay:

When any Timeline channel uses blockchain storage with finality requirements, all document processing is delayed by that finality period.

- Pure MyOS + Bank Timelines: Millisecond processing [OK]
- Pure Ethereum Timelines: ~2-3 minute batch processing [OK]
- Mixed MyOS + Ethereum: Forced to ~2-3 minute delay [X]

**This is a fundamental trade-off:**

- [OK] Gain: Maximum decentralization, censorship-resistance, public verifiability
- [X] Cost: All processing waits for blockchain finality

Timestamp Collisions:

Block timestamps have seconds precision, converted to microseconds by appending zeros. Multiple entries in the same block will have identical timestamps:

```
Entry 1: timestamp = 1699625240000000 (block 18234579, tx index 15)
Entry 2: timestamp = 1699625240000000 (block 18234579, tx index 47)
Entry 3: timestamp = 1699625240000000 (block 18234579, tx index 103)

```

For deterministic ordering of entries with identical timestamps, processors sort by:

1. `timestamp` (primary)
2. `channelName` (secondary, alphabetical)
3. `blockNumber` then `transactionIndex` (tertiary, for entries on same blockchain channel)
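
A sketch of the resulting comparator in TypeScript (field names follow the entry examples above; `blockNumber` and `transactionIndex` are present only for blockchain-backed entries):

```typescript
interface OrderedEntry {
  timestamp: number; // microseconds
  channelName: string;
  blockNumber?: number;      // blockchain channels only
  transactionIndex?: number; // blockchain channels only
}

// Deterministic ordering: timestamp, then channel name, then block position.
function compareEntries(a: OrderedEntry, b: OrderedEntry): number {
  return (
    a.timestamp - b.timestamp ||
    a.channelName.localeCompare(b.channelName) ||
    (a.blockNumber ?? 0) - (b.blockNumber ?? 0) ||
    (a.transactionIndex ?? 0) - (b.transactionIndex ?? 0)
  );
}

// entries.sort(compareEntries) yields the same order for every processor.
```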

### 9.6 When to Use Blockchain Timelines

**Good Use Cases:**

- High-value, infrequent transactions where ~3 minute delays are acceptable
- Public accountability requirements (anyone can verify on-chain)
- Maximum decentralization requirements
- Scenarios where participants strongly distrust each other and won't accept any centralized Timeline Provider

**Poor Use Cases:**

- Real-time interactions requiring immediate processing
- High-frequency events (cost and throughput limitations)
- Privacy-sensitive data (blockchain is public)
- When mixed with fast Timeline Providers (slows everything down)

### 9.7 Completeness Guarantee for Blockchain Timelines

How Processors Provide the Guarantee:

When queried for "Confirm no entries with timestamp less than T":

```
Query: "Confirm no entries with timestamp less than T"

Process:
1. Convert T (microseconds) to seconds: T_seconds = T / 1,000,000
2. Determine which block corresponds to T_seconds
3. Verify that block is finalized (has required confirmations)
4. Read all entries from Timeline smart contract up to finalized block
5. Return entries found (if any) OR completeness confirmation

Response:
{
  guarantee: "no-entries-before",
  timestamp: T,
  proof: {
    type: "Blockchain Finality Proof",
    finalizedBlock: 18234567,
    blockHash: "0xdef456...",
    confirmations: 12,
    entriesInTimelineUpToBlock: [] # Empty = no entries found
  }
}

```

**The Guarantee is Satisfied Because:**

- The queried timestamp T corresponds to a finalized block (immutable history)
- All entries up to that block have been read
- Future entries will be in later blocks with later timestamps
- Block timestamps are monotonically increasing

This demonstrates that blockchain-based Timelines **can** provide the completeness guarantee, but only for timestamps in the finalized past, not for the recent present.

* * *

## Conclusion

Blue Timelines enable deterministic multi-party document processing through:

1. **Append-only, hash-chained Timeline Entries** providing immutable, verifiable records of individual perspectives
2. **Binding completeness guarantees from Timeline Providers** ensuring document processors can safely determine event ordering across multiple independent Timelines
3. **Actor attribution** distinguishing between human principals and AI agents, enabling documents to enforce policies about which actions require human authorization
4. **Blockchain integration model** showing how decentralized Timeline storage can be incorporated despite finality delays

Together, these mechanisms create a foundation for autonomous, verifiable interactions where:

- Actions are cryptographically attributed to specific actors
- Event ordering is deterministic across all document processors
- AI agents can operate safely within defined boundaries
- Participants can choose Timeline Providers matching their trust, privacy, and performance requirements
- No central coordinator or global consensus is required for fast timelines
- Blockchain storage provides maximum decentralization when finality delays are acceptable

The result is a flexible, composable trust model that scales from convenient managed services to maximum-security decentralized infrastructure.

* * *

**End of Technical White Paper**



====================
DOCUMENT: Blue Language Specification
CATEGORY: root
URL: https://language.blue/docs/spec
====================


# Blue Language Specification 1.0

## PART I — THE LANGUAGE (1–14)

> **Scope of Part I.** Defines Blue’s content model, typing, resolution, schema, identity (BlueId), expansion and minimization. **No runtime/contract processing** is defined here (that is Part II). Where Part I references types like **Text**, **Integer**, **List**, **Boolean**, etc., their canonical definitions (with BlueIds) are in **Appendix A**.

## **1. Scope & Goals**

**Goal.** Blue is a universal, deterministic **content language** with:

- a strict, mergeable **type system** and **resolution** rules, and
- a content address called **BlueId** that is stable across equivalent authoring forms (minimal, expanded, resolved).

**Out of scope.** Runtime/workflow semantics, policies, operations, and processors (colloquially “contracts”) are not defined here. They live in **Blue Contracts & Processor Model**.

* * *

## **2. Core Data Model**

A **Blue node** is exactly one of:

- **Scalar** (string, number, boolean)
- **List** (array)
- **Object** (map of fields → nodes)

**Reserved keys** (language keywords) that MAY appear in any object node:

```
name, description, type, itemType, keyType, valueType,
value, items, blueId, blue, schema, mergePolicy, contracts
```

> There is **no** properties field in the language. An “object” node is written as regular fields under that node. (properties may exist **internally** in some libraries; it MUST NOT appear in documents.)

### **2.1 Wrapper Equivalence (normative)**

To improve ergonomics, Blue admits two equivalent authoring forms for scalars and lists:

- **Scalar**: `x: 1` **≡** `x: { value: 1 }` (equivalent only if the wrapper has **no other keys**)
- **List**: `x: [a, b]` **≡** `x: { items: [a, b] }` (equivalent only if the wrapper has **no other keys**)

Object nodes are written directly:

```
x:
  a: 1
  b: 2
```

A node **MUST NOT** combine payload kinds: a node may have **either** value **or** items **or** child fields (object) — never more than one. (Authors MAY rely on preprocessing to normalize forms; see §7.)
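
For illustration only (non-normative), a preprocessor could normalize the two wrapper forms into a single internal node shape. The sketch below is TypeScript with a deliberately simplified node model; the normative rules are those of this section and §7.

```typescript
type BlueNode =
  | { kind: "scalar"; value: string | number | boolean }
  | { kind: "list"; items: unknown[] }
  | { kind: "object"; fields: Record<string, unknown> };

// Normalize authoring forms: `x: 1` and `x: { value: 1 }` map to the same node,
// as do `x: [a, b]` and `x: { items: [a, b] }`, but only if the wrapper has no other keys.
function normalize(raw: unknown): BlueNode {
  if (Array.isArray(raw)) return { kind: "list", items: raw };
  if (raw !== null && typeof raw === "object") {
    const obj = raw as Record<string, unknown>;
    const keys = Object.keys(obj);
    if (keys.length === 1 && keys[0] === "value") {
      return { kind: "scalar", value: obj.value as string | number | boolean };
    }
    if (keys.length === 1 && keys[0] === "items") {
      return { kind: "list", items: obj.items as unknown[] };
    }
    return { kind: "object", fields: obj };
  }
  return { kind: "scalar", value: raw as string | number | boolean };
}
```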

* * *

## **3. name / description — Identity vs Field Semantics**

### **3.1 Meaning and neutrality (normative)**

- name and description are **content** on the node; they **do** affect BlueId.
- They are **node-local** (document identity or field label).
- They **MUST be ignored** by:
    - type conformance checks,
    - subtype compatibility checks,
    - structural/shape matchers (including resolution matchers).

(**Matcher Neutrality.**) Matchers **MUST** ignore name, description.

### **3.2 Document identity vs field semantics (normative)**

- **Document-level identity.** A node “of type T” is **not** T; it is a new entity. The resolved node’s **top-level** name/description come only from the instance and **MUST NOT** be inherited from type. The embedded type object may carry its own name/description *inside* node.type.
- **Field-level semantics.** When a type materializes fields/items into the instance, those child nodes carry the type’s name/description until the instance explicitly overrides them on those child paths.

### **3.3 Expansion (node references) (normative)**

Dereferencing `{ blueId: X }` to materialize the node **may** copy the referenced node’s name/description **onto that node** (because the node itself is being materialized). This is **not** inheritance from type and does **not** violate §3.2.

### **3.4 Equality (normative)**

- **Identity equality (BlueId)** includes name/description.
- **Structural/type equality** ignores name/description.

* * *

## **4. Types — Schema and Overlay (Uniform Model)**

### **4.1 Any node can be a type (normative)**

There is no “schema vs instance” bifurcation. Any node can appear under type. If T is used in type: T, T contributes:

- structure (fields/items),
- nested type chains,
- **schema** constraints (§5),
- **fixed values** (become invariants).

**Fixed-value invariant.** A concrete value in a type is **immutable** in descendants at that path.

### **4.2 Subtyping & Liskov (normative)**

When resolving, descendants MUST satisfy:

1. **No fixed-value override** (immutable values).
2. **Type compatibility** (equal or subtype of inherited type).
3. **Additive structure** (cannot remove guaranteed fields).
4. **Collections** maintain itemType, keyType, valueType compatibility.

**Liskov.** Every instance of a subtype MUST be substitutable for its parent.

### **4.3 Instance-as-type (overlay)**

Nodes representing individuals can be used as types:

- *Alice* may be type: Person.
- *Alice Smith* may be type: Alice.

All fixed values in *Alice* become invariants in *Alice Smith*. *Alice*’s name/description do **not** flow to *Alice Smith* (§3.2).

* * *

## **5\. schema — Constraint Keywords (complete list)**

Attach schema to **any** node. All constraints **accumulate** along the type chain; stricter wins. Irreconcilable constraints **MUST** fail resolution (§11).

### **5.1 Presence**

- required: true — the field **must** be present in resolved descendants.

### **5.2 For lists**

- minItems: `<non-negative integer>`
- maxItems: `<non-negative integer>` (≥ minItems)
- uniqueItems: true|false (uniqueness by BlueId of items)

### **5.3 For objects (maps)**

- minFields: `<non-negative integer>`
- maxFields: `<non-negative integer>` (≥ minFields)

*(Rationale: “fields” reflects that there is no properties key in the language.)*

### **5.4 Numerics (applies to numeric scalars)**

- minimum: number
- maximum: number (≥ minimum if both present)
- exclusiveMinimum: number (the value must be strictly greater than this bound)
- exclusiveMaximum: number (the value must be strictly less than this bound)
- multipleOf: number (> 0). If multiple appear in the chain, use **LCM**.

### **5.5 Strings**

- minLength: `<non-negative integer>`
- maxLength: `<non-negative integer>` (≥ minLength)
- pattern: `<ECMA-262 regex string>`

### **5.6 Enumerations**

- enum: \[ v1, v2, … \] (values are scalars; equality by canonical JSON)

> **Note.** There is no separate const keyword; a **fixed value** in the type (i.e., the type sets value or a concrete subtree value) already enforces a constant.

* * *

## **6\. Resolution Semantics**

**Goal.** Produce a fully validated, **resolved snapshot** from authoring/overlay input.

### **6.1 Algorithm (normative)**

Given a source node S:

1. **Preprocess** blue (§7).
2. **Resolve type chain**: if S.type exists, recursively resolve it (following blueId via a provider) to produce ancestor A.
3. **Merge** A into target T, then merge S into T:
    - **Values**: copy if absent; if both present, **must be equal**.
    - **Types**: assign/propagate under §4.2.
    - **Schema**: accumulate (§5).
    - **Object fields**: merge recursively; child **must** remain compatible.
    - **Lists**: see §6.3.
4. **Validate** all schema constraints.
5. **Finalize** the resolved snapshot (implementations **SHOULD** freeze it; §10).
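
The following Python sketch is informative only. It illustrates the merge step (values, schema accumulation, recursive field merge); type-chain resolution, list handling, and full constraint validation are omitted, and the names are illustrative rather than a required API.

```
def merge(ancestor, source):
    """Merge a resolved ancestor A and a source S into a fresh target node."""
    target = {}
    for node in (ancestor, source):                  # A first, then S (§6.1 step 3)
        for key, val in node.items():
            if key == "value":
                if "value" in target and target["value"] != val:
                    raise ValueError("fixed-value override is forbidden (§4.2)")
                target["value"] = val
            elif key == "schema":
                acc = dict(target.get("schema", {}))
                acc.update(val)                      # a real resolver keeps the stricter bound
                target["schema"] = acc
            elif isinstance(val, dict) and isinstance(target.get(key), dict):
                target[key] = merge(target[key], val)   # recursive field merge
            else:
                target[key] = val
    return target

parent = {"x": {"value": 1}, "y": {"schema": {"minimum": 0}}}
child  = {"y": {"value": 5, "schema": {"maximum": 9}}}
assert merge(parent, child) == {
    "x": {"value": 1},
    "y": {"value": 5, "schema": {"minimum": 0, "maximum": 9}},
}
```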

### **6.2 Requirement Overlays (normative)**

An ancestor may partially **constrain** a subtree (overlay obligations) without binding a concrete type at that path:

```
# Parent
name: A
prop1:
  x: 1           # requirement overlay at /prop1 (no type)
  schema: { ... }  # optional constraints
```

A descendant may later set prop1.type: Some **only if** the merged result **still satisfies** all overlay obligations (fixed values and schema). If the overlay had a type, the descendant’s type must be **equal or a subtype**.

Conflicts (e.g., overlay forces x = 1 but Some forces x = 2) **MUST fail** resolution.

### **6.3 Lists at authoring time (normative)**

- **Refinement** of an inherited index is allowed if the element remains **subtype-compatible** with the inherited element.
- **Append** is allowed.
- **Deletion/reorder** within the inherited prefix **MUST** fail resolution.

### **6.4 Limits (normative)**

Resolvers **SHOULD** support path/depth limits to bound expansion of large graphs. Limits affect only **materialization**, not meaning.

* * *

## **7\. Preprocessing — the blue Directive (normative)**

The root MAY contain blue (string or object). Preprocessing:

- runs **before** hashing & resolution,
- normalizes authoring (type aliases → blueId, type inference, renamings),
- **MUST** remove the blue directive afterward (it never affects BlueId).

### **7.1 Required baseline transforms**

1. **Type aliases → blueId** (per configured mappings).
2. **Primitive inference** for bare scalars (assign Text, Integer, Double, Boolean).
3. Optional attribute/case/date normalizations (profile-dependent).
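
An informative Python sketch of these baseline transforms follows. The alias table, the example blueIds, and the exact wrapped output shape are illustrative assumptions; a real preprocessor applies the configured mappings of its profile.

```
TYPE_ALIASES = {"Person": "GRwTYs..."}        # illustrative alias → blueId mapping

def preprocess(node):
    if isinstance(node, bool):                            # check bool before int
        return {"type": "Boolean", "value": node}
    if isinstance(node, int):
        return {"type": "Integer", "value": node}
    if isinstance(node, float):
        return {"type": "Double", "value": node}
    if isinstance(node, str):
        return {"type": "Text", "value": node}
    if isinstance(node, list):
        return [preprocess(x) for x in node]
    if not isinstance(node, dict):
        return node
    out = {}
    for k, v in node.items():
        if k == "blue":
            continue                                      # directive removed before hashing
        if k in ("name", "description"):
            out[k] = v                                    # labels stay plain text
        elif k == "type" and isinstance(v, str) and v in TYPE_ALIASES:
            out[k] = {"blueId": TYPE_ALIASES[v]}          # type alias → blueId
        else:
            out[k] = preprocess(v)
    return out

doc = {"blue": "5j04jf...", "name": "Alice", "type": "Person", "age": 25}
# → {'name': 'Alice', 'type': {'blueId': 'GRwTYs...'}, 'age': {'type': 'Integer', 'value': 25}}
```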

### **7.2 Security**

Remote fetch of transforms is **DISABLED** by default (opt-in only).

* * *

## **8\. BlueId (Content Address)**

### **8.1 Definition (normative)**

**BlueId** = Base58( SHA-256( RFC 8785-canonical-JSON(node) ) ) computed **bottom-up** (children before parents).

### **8.2 Cleaning & Shape Normalization (normative)**

Before producing RFC 8785 canonical JSON, implementations **MUST** normalize nodes as follows. The aims are to remove non-meaningful artifacts, ensure a single canonical shape, and preserve identity-relevant content.

#### **8.2.1 Global cleaning**

1. **Remove null** at any depth.
    - In **lists**, null elements are removed (i.e., \[a, null, b\] → \[a, b\]).
    - In **objects**, fields with null values are removed.
2. **Remove empty maps `{}`** at any depth (including within lists).
    - Cascading removal is permitted (a parent map may become `{}` and then be removed itself).
3. **Preserve empty lists \[\]**. A present-empty list is **not** the same as an absent field.

> **Note:** This “cleaning” is applied recursively across the entire tree before hashing.

#### **8.2.2 Canonical shape: wrapped form is official**

Blue’s **official canonical representation** is the **wrapped form**. Authoring sugar **MUST** be normalized to wrapped form prior to hashing:

- **Scalars.** Authoring sugar:

```
x: 1
```

- Canonical wrapped form:

```
x:
  value: 1
```

- **Lists.** Authoring sugar:

```
x: [a, b]
```

- Canonical wrapped form:

```
x:
  items: [a, b]
```

A node **MUST NOT** combine payload kinds (value, items, or object fields). After normalization, each node is **exactly one of**:

- scalar wrapper `{ value: … }`,
- list wrapper `{ items: […] }`, or
- plain object (map of fields → nodes).

> **Equivalence guarantee:** Even if an implementation hashes authoring sugar directly, the hasher **MUST** treat `x: 1` and `x: { value: 1 }` (and likewise `x: [a,b]` vs `x: { items: [a,b] }`) as **identical** (§ 8.3, § 8.4 ensure this by inlining value and domain-separating list folds).

#### **8.2.3 List control forms (see § 12)**

Reserved list item forms are **not content** (except $empty) and **MUST** be handled specially before hashing:

- **$pos overlays** — **consumed** by normalization/merge; they **do not** appear in the hashed content list.
- **$previous anchors** — never content. Hashers either (a) **seed** the list fold with the provided prior list BlueId **iff** the resolved prefix is unchanged (§ 12.5.2), or (b) **recompute** from id(\[\]) (§ 8.4, § 12.5.1) if the prefix changed or no valid anchor is present.
- **$empty: true** — **is content** and remains as a real element; it hashes like any other object element (§ 12.4).

These forms are recognized **only** at the top level of items for nodes whose type is List.

#### **8.2.4 Object shape**

After normalization, an **object node** is a plain map of field → node.

There is **no properties field** in the language; that is an internal detail in some libraries and **MUST NOT** appear in documents. Key order is irrelevant (RFC 8785 canonicalization defines ordering).

#### **8.2.5 Examples (informative)**

- **Nulls inside a list (removed):**

```
list: [a, null, b]
```

→ canonical wrapped content:

```
list:
  items: [a, b]
```

- **Scalar sugar normalized:**

```
x: 1
```

→

```
x:
  value: 1
```

- **List sugar normalized:**

```
x: [a, b]
```

→

```
x:
  items: [a, b]
```

- **$empty is content (preserved):**

```
list:
  items: [a, { $empty: true }, b]
```

- **$pos consumed before hashing:**

```
list:
  items:
    - $pos: 0
      value: a'
    - b
```

→ normalized content before hashing:

```
list:
  items: [a', b]
```

- **$previous as anchor (not content):**

```
list:
  items:
    - $previous: { blueId: P }
    - c
```

→ if the prefix truly matches P, the hasher seeds the fold with P and folds only c; otherwise it recomputes from id(\[\]).

### **8.3 Map hashing (normative)**

If and only if the map is **exactly** `{ "blueId": "<id>" }`, return `<id>` (**pure reference short-circuit**).

Otherwise, build a helper map **H** in lexicographic key order:

- For **name**, **description**, and **value**: **inline** their values.
- For **every other key k with value v**: include `k: { "blueId": id(v) }`.

Then RFC 8785-canonicalize **H** and hash.

> This rule ensures scalar wrappers `{ value: S }` are treated as scalars for hashing, and that all nested structure contributes by **BlueId**, not by byte shape.

### **8.4 List hashing (normative)**

Use a **domain-separated streaming fold** over element BlueIds:

- **Empty list seed:**

```
id([]) = H({ "$list": "empty" })
```

- **Fold step:**

```
fold(prevId, x) =
  H({
    "$listCons": {
      "prev": { "blueId": prevId },
      "elem": { "blueId": id(x) }
    }
  })
```

- **Whole list:**

```
id([a₁,…,aₙ]) = fold( fold( … fold( id([]), a₁ ) … , aₙ ) )
```

**Properties**:

- Order is significant; multiplicity is preserved.
- No flattening (\[\[1,2\],3\] ≠ \[1,2,3\]).
- Singleton is distinct (\[A\] ≠ A).
- Appends can be **O(Δ)** when seeded by a valid $previous (§ 12.5.2).

> **Seeding with $previous:** If the final resolved list’s prefix **exactly equals** the list whose BlueId is provided by $previous, start the fold from that BlueId and only fold appended elements. Otherwise ignore the anchor and recompute from id(\[\]).
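
The rules of §8.3 and §8.4 can be condensed into a short informative sketch. Python's `json` module stands in for a strict RFC 8785 canonicalizer and the Base58 helper is simplified; this is not a reference implementation.

```
import hashlib, json

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    n, s = int.from_bytes(data, "big"), ""
    while n:
        n, r = divmod(n, 58)
        s = ALPHABET[r] + s
    return s or ALPHABET[0]

def h(obj) -> str:
    canon = json.dumps(obj, sort_keys=True, separators=(",", ":"))   # simplified canonical JSON
    return base58(hashlib.sha256(canon.encode()).digest())

def blue_id(node) -> str:
    if isinstance(node, list):                       # §8.4 domain-separated streaming fold
        acc = h({"$list": "empty"})                  # id([])
        for elem in node:
            acc = h({"$listCons": {"prev": {"blueId": acc},
                                   "elem": {"blueId": blue_id(elem)}}})
        return acc
    if isinstance(node, dict):
        if set(node) == {"blueId"}:                  # pure reference short-circuit (§8.3)
            return node["blueId"]
        helper = {}
        for k, v in node.items():
            if k in ("name", "description", "value"):
                helper[k] = v                        # inlined
            else:
                helper[k] = {"blueId": blue_id(v)}   # structure contributes by BlueId
        return h(helper)
    return h(node)                                   # scalar: canonical JSON of the value

assert blue_id([["a", "b"], "c"]) != blue_id(["a", "b", "c"])   # no flattening
assert blue_id(["a"]) != blue_id("a")                           # singleton is distinct
```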

### **8.5 Scalars (normative)**

Hash the RFC 8785 canonical JSON representation of the scalar value.

### **8.6 Storage rule (normative)**

A node **MUST NOT** store **its own** BlueId as authoritative content. Using `{ "blueId": "…" }` to reference **other** nodes is permitted and encouraged.

* * *

## **9\. Expansion**

**Expansion** materializes content referenced by blueId **without** changing identity. It is distinct from **Resolution** (§6).

### **9.1 What expansion does (normative)**

Given `p: { blueId: X }`, expansion fetches content for `X` and materializes it **in place** (or side-by-side) within limits (§9.3), allowing nested references to expand recursively.

### **9.2 Identity invariance (normative)**

Expansion **MUST NOT** change BlueId. Pure references hash to their blueId (short-circuit), and materialized subtrees contribute the same id(subtree) because map hashing replaces structure with `{ blueId: … }` under the hood.

### **9.3 Limits (normative)**

Implementations **SHOULD** support path/depth limits to avoid runaway traversal of large graphs. Limits affect only materialization, not identity.

* * *

## **10\. Minimization (Normalizing Back to Authoring Form)**

**Minimization** produces a **minimal authoring view** that, when re-resolved, yields the same resolved snapshot and the same BlueId.

### **10.1 Rules (normative)**

Given a resolved snapshot R, minimize(R) MUST:

- Remove any fields **fully re-derivable** from the type chain (structure brought solely by types, defaulted values that are fixed by types).
- Replace materialized types with their **canonical type: `{ blueId: … }`** forms when available.
- Optionally collapse large subtrees to `{ blueId: … }` when the subtree equals a known blueId (this is an allowed minimization, not required).
- **Preserve** instance-level name/description on the node.
- Never remove instance-fixed values that are **not** derivable from the type chain.

* * *

## **11\. Circular References (Combined BlueId for Direct Cycles)**

Some authoring graphs contain **direct cycles across documents** (e.g., Person ↔ Dog). Blue supports a **combined BlueId** with per-document suffixes.

### **11.1 Authoring placeholders (normative)**

During save/serialization of a cyclic set:

- Temporarily replace each direct cyclic blueId reference with a **placeholder** of forty-four ASCII '0' characters ('0' × 44) — the **ZERO\_BLUEID** sentinel, which never appears as a valid BlueId.
- Calculate **preliminary** BlueIds for each document in **isolation** (with placeholders).
- **Sort** documents lexicographically by their preliminary BlueIds and assign **positions** #0..#(n-1).

Rewrite placeholders:

- For each internal cyclic reference that points to another document in the set, place blueId: "this#k" where k is the assigned position of the target.

### **11.2 Master & final BlueIds (normative)**

- Build a list L = \[doc#0, doc#1, …, doc#(n-1)\] **with the this#k references in place** and compute its BlueId → **MASTER**.
- Each document’s **final BlueId** becomes: MASTER#i where i is the document’s position.

These **final** BlueIds are stable, content-addressed identities for the cyclic set.

**Example.**

```
# Person (position #1 after sorting)
name: Person
pet:
  type: { blueId: 'this#0' }

# Dog (position #0 after sorting)
name: Dog
owner:
  type: { blueId: 'this#1' }
breed:
  type: Text
```

If MASTER = "12345...", the final identities are Dog = "12345...#0", Person = "12345...#1".
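
An informative Python sketch of the placeholder, sort, rewrite, and MASTER steps follows. For brevity, `prelim_id()` hashes plain canonical JSON instead of the full §8 BlueId algorithm, and cyclic references are modeled with an illustrative `$target` marker; neither is normative.

```
import hashlib, json

ZERO_BLUEID = "0" * 44                       # sentinel, never a valid BlueId

def _canon(x): return json.dumps(x, sort_keys=True, separators=(",", ":"))
def prelim_id(doc): return hashlib.sha256(_canon(doc).encode()).hexdigest()

def _rewrite_refs(node, replace):
    """Replace every cyclic reference according to replace(target_name)."""
    if isinstance(node, dict):
        if set(node) == {"blueId"} and isinstance(node["blueId"], dict):
            return {"blueId": replace(node["blueId"]["$target"])}
        return {k: _rewrite_refs(v, replace) for k, v in node.items()}
    if isinstance(node, list):
        return [_rewrite_refs(v, replace) for v in node]
    return node

def combined_ids(named_docs):
    # 1. placeholders + preliminary BlueIds computed in isolation
    prelim = {name: prelim_id(_rewrite_refs(doc, lambda _t: ZERO_BLUEID))
              for name, doc in named_docs.items()}
    # 2. sort by preliminary id and assign positions #0..#(n-1)
    order = sorted(named_docs, key=lambda name: prelim[name])
    pos = {name: i for i, name in enumerate(order)}
    # 3. rewrite placeholders as this#k and compute MASTER over the ordered list
    finalized = [_rewrite_refs(named_docs[name], lambda t: f"this#{pos[t]}")
                 for name in order]
    master = prelim_id(finalized)
    # 4. final identities are MASTER#i
    return {name: f"{master}#{pos[name]}" for name in named_docs}

person = {"name": "Person", "pet":   {"type": {"blueId": {"$target": "Dog"}}}}
dog    = {"name": "Dog",    "owner": {"type": {"blueId": {"$target": "Person"}}},
          "breed": {"type": "Text"}}
print(combined_ids({"Person": person, "Dog": dog}))
```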

* * *

## **12\. Lists (authoring, merge, hashing)**

### **12.1 Authoring model**

A list field SHOULD be authored in typed form:

```
<field>:
  type: List
  itemType: <Type>                  # RECOMMENDED
  mergePolicy: append-only | positional
  items:
    - ...elements...
```

A surface list (e.g., tags: \[a, b, c\]) is permitted for simple cases; the typed form is REQUIRED when mergePolicy, anchors, or overlays are used.

### **12.2 Allowed element forms inside items**

Each item MUST be exactly one of:

1. **Normal element** (content)

```
- <scalar | object (optionally with type) | { blueId: "…" }>
```

2. **Append anchor** (fast append / immutability proof — reserved)

```
- $previous:
    blueId: <PrevListBlueId>
```

- Only allowed as the **first** item.
- Shape MUST be exactly as shown (a single top-level $previous key whose value is an object with a single blueId string).

3. **Positional overlay** (refine by index — reserved; mergePolicy: positional only)

*Map overlay (best for object elements):*

```
- $pos: 1
  ...overlay fields merged into parent element at index 1...
```

*Scalar/list overlay (when overlay isn’t a map):*

```
- $pos: 1
  value: <scalar | list | { blueId: "…" }>
```

Constraints:

- $pos MUST be a non-negative integer (0-based).
- Only valid when mergePolicy: positional.

4. **Placeholder element** (reserved; **content**)

```
- $empty: true
```

- A **real element** that occupies a position (a meaningful “hole”).
- Distinct from null (ignored) and from \[\] (present-empty list at that node).

**Scope of reserved attributes.** The special keys $previous, $pos, $empty are recognized **only** as top-level keys of elements inside the items of a node whose type is List. They have no special meaning elsewhere.

### **12.3 Default policy**

If mergePolicy is omitted, processors MUST assume positional. For histories/ledgers/timelines, authors MUST specify append-only explicitly.

### **12.4 Semantics of null, `{}`, `[]`, and $empty**

- null — ignored (no information).
- Empty object `{}` — ignored.
- Empty list \[\] — **preserved** (present-empty ≠ absent) and hashes differently from either null or absent.
- $empty: true — **content**; remains for hashing.

### **12.5 Merge semantics**

Let *parent list* be the resolved list from the parent type/instance; let *child overlay* be the child’s items.

#### **12.5.1 append-only**

- **Disallow** any modification/removal of indices < parentLength. No $pos allowed.
- **Allow** appends (normal items after the parent prefix).
- Optional append anchor (first item only):

```
- $previous: { blueId: <PrevListBlueId> }
```

- **Errors:**
    - Any $pos overlay in an append-only list.
    - $previous not first, malformed, or repeated.

#### **12.5.2 positional**

- **Refine** any index *i* (0 ≤ i < parentLength) via $pos: i:
    - **Map overlay**: field-wise merge, subject to type/constraint compatibility.
    - **Scalar/list overlay**: replace with value, subject to compatibility.
- **Append** new elements by listing normal items after overlays (no $pos on appends).
- **Forbid** reordering/removal/gaps within the inherited prefix.
- **Errors:**
    - $pos missing or non-integer; out of range; duplicate overlays for the same *i*.
    - Type/constraint incompatibility at *i*.
    - Attempted reordering/removal of parent elements.
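
The following informative Python sketch condenses the two merge policies. Subtype and constraint compatibility checks, duplicate `$pos` detection, and anchor verification are omitted; names are illustrative.

```
def merge_list(parent_items, child_items, merge_policy="positional"):
    result, appended = list(parent_items), []
    for i, item in enumerate(child_items):
        if isinstance(item, dict) and "$previous" in item:
            if i != 0:
                raise ValueError("$previous is only allowed as the first item")
            continue                                     # anchor, not content (§12.2)
        if isinstance(item, dict) and "$pos" in item:
            if merge_policy != "positional":
                raise ValueError("$pos is forbidden under append-only (§12.5.1)")
            pos = item["$pos"]
            if not isinstance(pos, int) or not (0 <= pos < len(parent_items)):
                raise ValueError("$pos missing, non-integer, or out of range")
            overlay = {k: v for k, v in item.items() if k != "$pos"}
            if set(overlay) == {"value"}:
                result[pos] = overlay["value"]           # scalar/list overlay: replace
            elif isinstance(result[pos], dict):
                result[pos] = {**result[pos], **overlay} # map overlay: field-wise merge
            else:
                result[pos] = overlay
            continue
        appended.append(item)                            # normal element: append
    return result + appended                             # inherited prefix never shrinks

# Positional hole refinement (see §12.8): parent [A, {$empty: true}, C], child fills index 1.
assert merge_list(["A", {"$empty": True}, "C"],
                  [{"$pos": 1, "value": "B"}]) == ["A", "B", "C"]

# Positional refine + append (see §12.8).
parent = [{"type": "Phone", "os": "any"}, {"type": "Accessory"}]
child  = [{"$pos": 0, "type": "iPhone", "os": "iOS"}, {"type": "Warranty", "months": 24}]
assert merge_list(parent, child) == [{"type": "iPhone", "os": "iOS"},
                                     {"type": "Accessory"},
                                     {"type": "Warranty", "months": 24}]
```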

### **12.6 BlueId calculation for lists**

The list hasher uses a **domain-separated streaming fold**:

- `id([]) = H({ "$list": "empty" })`
- For list \[e1, e2, …, en\]:
    - acc := id(\[\])
    - For each element e in order:

```
acc := H({
  "$listCons": {
    "prev": { "blueId": acc },
    "elem": { "blueId": id(e) }
  }
})
```

    - Return acc.

**Control forms:**

- $pos overlays are **consumed** by merge/normalization before hashing (not content).
- $previous can be used as an **optimization**: if and only if the effective prefix **exactly equals** the prior list with BlueId P = $previous.blueId, hashers MAY seed acc := P and fold only the appended elements. If the prefix differs, seed from id(\[\]) and recompute fully.
- $empty: true is **content** and hashes like any other element (it is an object node with a boolean field).

**Properties guaranteed:**

- Order & multiplicity preserved.
- No flattening: \[\[A,B\],C\] ≠ \[A,B,C\].
- Singleton preserved: \[A\] ≠ A.
- \[\] hashes differently from null/absent.
- O(Δ) appends are possible when $previous is valid.

### **12.7 Conformance checklist (must-pass)**

- id(\[\]) defined and id(\[\]) ≠ id(null/absent).
- \[A\] hashes differently from A.
- \[\[A,B\],C\] hashes differently from \[A,B,C\].
- $previous recognized **only** as first item; ignored when prefix changed.
- append-only: $pos causes error; normal appends succeed.
- positional: $pos applies; duplicates/out-of-range cause error; appended items appear after parent prefix in author order.
- $empty: true remains as content and affects BlueId.
- Cleaning removes `null`/`{}` but **not** `[]`.

### **12.8 Worked examples (informative)**

**Present-empty vs absent**

```
# Absent
doc: { }

# Present-empty
doc:
  list:
    type: List
    items: []
```

blueId(present-empty) != blueId(absent).

**Append-only timeline (fast append, immutable prefix)**

```
# Parent
entries:
  type: List
  itemType: Timeline Entry
  mergePolicy: append-only
  items:
    - { type: Timeline Entry, ts: 2025-09-01T12:00:00Z, message: A }
    - { type: Timeline Entry, ts: 2025-09-01T12:05:00Z, message: B }

# Child (append C)
entries:
  type: List
  itemType: Timeline Entry
  mergePolicy: append-only
  items:
    - $previous: { blueId: PrevId }   # PrevId = id(parent entries)
    - { type: Timeline Entry, ts: 2025-09-01T12:10:00Z, message: C }
```

Hasher seeds from PrevId and folds only the new element(s).

**Positional hole and refinement**

```
# Parent
entries:
  type: List
  mergePolicy: positional
  items:
    - A
    - $empty: true
    - C

# Child
entries:
  type: List
  mergePolicy: positional
  items:
    - $pos: 1
      value: B
# Resolved: [A, B, C]
```

**Positional refine + append**

```
# Parent
items:
  type: List
  itemType: PhoneOrAccessory
  mergePolicy: positional
  items:
    - { type: Phone, os: any }
    - { type: Accessory }

# Child
items:
  type: List
  mergePolicy: positional
  items:
    - $pos: 0
      type: iPhone
      os: iOS
    - { type: Warranty, months: 24 }

# Resolved:
# - { type: iPhone,   os: iOS }
# - { type: Accessory }
# - { type: Warranty, months: 24 }
```

* * *

## **13\. Conformance**

A Blue Language 1.0 implementation:

- **MUST** implement preprocessing, resolution, schema validation (Sections 5–7, 11).
- **MUST** implement BlueId exactly (Section 8), including wrapper normalization (§8.2).
- **MUST** treat name/description as neutral for structural/type checks (§3).
- **SHOULD** provide Expansion with limits (Section 9).
- **SHOULD** provide Minimization (Section 10).
- **SHOULD** expose frozen resolved snapshots (implementation guidance in §10).

**Profiles (recommended):**

- **BlueId-Only** — §8.
- **Resolver-Core** — §§5–7, 8, 11.
- **Language-Full** — §§3–12.

* * *

## **14\. Worked Examples (informative)**

### **14.1 Content-addressable types (Alice / Simple Amount)**

```
name: Simple Amount
amount:  { type: Double }
currency:{ type: Text }
# => blueId: FgHZjS...

name: Person
age:    { type: Integer }
spent:  { type: { blueId: FgHZjS... } }    # Simple Amount
# => blueId: GRwTYs...
```

**Instance:**

```
name: Alice
type: { blueId: GRwTYs... } # Person
age: 25
spent:
  amount: 27.15
  currency: USD
# => blueId: 3JTd8s...
```

**Fully expanded** (type chain materialized) has the **same BlueId** (Expansion & Resolution do not change identity).

### **14.2 Blue directive (aliases & inference)**

```
blue: 5j04jf...
name: Alice
type: Person
age: 25
spent:
  amount: 27.15
  currency: USD
```

Preprocessing replaces Person with its blueId and may infer basic types for scalars. The blue key is removed before hashing.

### **14.3 “Same image, different meaning”**

```
# A
name: Person to Avoid
description: This guy will kill you today
type: Image
image: { blueId: 123..456 }

# B
name: Family Member
description: Trust this person
type: Image
image: { blueId: 123..456 }
```

Different BlueIds (identity content differs), but structural/type checks ignore labels (§3).

### **14.4 Requirement Overlay → later type binding**

```
# Parent
name: A
prop1:
  x: 1                # overlay (no type)
  schema: { /* optional constraints */ }

# Child
name: B
type: A
prop1:
  type: Some          # legal only if merged result preserves x = 1 (and schema)
```

If Some would force x = 2, resolution fails (§6.2).

### **14.5 Lists: refine + append**

```
# Parent
name: Trip
segments:
  type: List
  itemType: Flight Segment
  items:
    - { type: Flight Segment, carrier: BA }

# Child
name: Trip LHR->SFO
type: Trip
segments:
  - { type: Flight Segment, carrier: BA, from: LHR, to: JFK } # refine index 0
  - { type: Flight Segment, carrier: BA, from: JFK, to: SFO } # append
```

Reorder/delete of inherited prefix elements is invalid; append is valid.

### **14.6 Expansion with limits**

Start from:

```
blueId: 3JTd8s... # Alice
```

Expanding `/spent/*` hydrates only `spent`:

```
name: Alice
type: { blueId: GRwTYs... }
age: 25
spent:
  amount: 27.15
  currency: USD
```

BlueId remains unchanged (§9).

### **14.7 Minimization**

From a resolved snapshot with fully materialized type subtrees, minimize():

- collapses type objects to `{ blueId: … }` when available,
- removes structure derivable from the type chain,
- may collapse large subtrees to `{ blueId: … }` when exact matches are known.

The minimized authoring form re-resolves to the same snapshot and BlueId.

### **14.8 Circular-set BlueId**

```
# Authoring (placeholders after algorithm determines positions)
# Dog (#0)
name: Dog
owner: { type: { blueId: 'this#1' } }

# Person (#1)
name: Person
pet:   { type: { blueId: 'this#0' } }
```

The ordered list \[Dog, Person\] yields MASTER. Final identities: Dog = MASTER#0, Person = MASTER#1.

* * *

# Part II — Contracts & Document Processing (normative)

**Purpose.** Part II defines how a Blue document changes state when an event arrives. The processor is a deterministic function:

```
PROCESS(document, event) → (new_doc, triggered_events, total_gas)

```

- **new\_doc** — the updated document after all work (including cascades, bridging, and FIFO drains).
- **triggered\_events** — the root-scope events emitted during the run, including lifecycle items (e.g., *Document Processing Initiated*, any terminal *Document Processing Fatal Error*), whether or not they were handled.
- **total\_gas** — a deterministic tally of gas units consumed during the run (handlers, cascades, and drains).

Events are Blue nodes. Channels interpret Blue nodes by type/shape (and, where relevant, local routing context).

* * *

## 15\. Contracts: how documents become executable

Every object node MAY contain a `contracts` dictionary:

```
contracts: { <key: Text> → <value: Contract> }   # see Appendix A (“Contract”)

```

Contracts tell the processor what to do with a node. Each contract is identified by a type BlueId (published elsewhere in the registry). A processor either supports a contract’s type BlueId (including derived types that extend a supported contract type) or returns a must-understand failure (see §22.1).

**Contract type matching (normative).**  
Processors MUST treat a contract as supported when its type is the same as or a subtype (via Blue type extension) of a supported contract type. Exact BlueId matches take priority. If multiple supported types match, the processor MUST select the most specific match (a supported type that is a subtype of every other matching supported type).

**Contract roles** (types in Appendix A):

- **Channels** — where events may arrive. A channel advertises which events it accepts and may adapt/reshape external payloads into channelized form for handlers (see *Channel*, plus concrete channels in Appendix A §A.4).
- **Handlers** — logic bound to exactly one channel (same scope). A handler may (i) request document changes (list of *Json Patch Entry*, Appendix A), (ii) emit new events (Blue nodes), (iii) `consumeGas(units: Integer)`, and (iv) `terminate(cause, reason?)`. No other side effects are permitted. (See *Handler* in Appendix A.)
- **Markers** — informative state & policy. Markers never run logic; they carry rules or state the processor obeys (e.g., *Process Embedded*, *Processing Initialized Marker*, *Processing Terminated Marker*, *Channel Event Checkpoint*; all in Appendix A).

**Matching rule (normative).**  
Handlers MUST only match channelized events produced by channels in the same scope (the node whose `contracts` map contains them). There is no engine-level “channel flag”; channels define their own recognition (by payload type/shape and, when needed, scope context).

**Ordering (normative).**  
When multiple contracts match, the processor sorts channels, and then handlers within each channel, by:

1. `order` (number; missing = 0), then
2. `key` (the contract map key, lexicographic).  
   This yields a deterministic run order.
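
An informative sketch of this ordering rule, matching the `sort_by_order_then_key` helper used in the §19 pseudocode (Python, illustrative only):

```
def sort_by_order_then_key(contracts):
    """contracts: iterable of (key, contract-dict) pairs taken from a contracts map."""
    return sorted(contracts, key=lambda kv: (kv[1].get("order", 0), kv[0]))

channels = [("zeta",  {"order": 0}),
            ("alpha", {}),              # missing order counts as 0
            ("beta",  {"order": -1})]
assert [k for k, _ in sort_by_order_then_key(channels)] == ["beta", "alpha", "zeta"]
```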

**Reserved processor keys under a contracts map (normative).**

- `embedded` — *Process Embedded* (Appendix A)
- `initialized`— *Processing Initialized Marker* (Appendix A)
- `terminated` — *Processing Terminated Marker* (Appendix A)
- `checkpoint` — *Channel Event Checkpoint* (Appendix A)

These keys are reserved for the processor’s own use. If any exist with an incompatible type, processing MUST terminate as runtime fatal (§22.2).

**Write-protection of reserved keys (normative).**  
Contracts (handlers or channels) MUST NOT patch any reserved key path (`/…/contracts/(embedded|initialized|terminated|checkpoint)` or descendants). Attempting to `add`/`replace`/`remove` a reserved key path from a handler/channel is a deterministic runtime fatal at that scope (§22.2). Processor writes to these keys are permitted only as specified in §§20, 22, 23 and via Direct Writes (§19.1).

**Read-only contract inputs (normative).**  
Contracts (channels and handlers) **MUST treat delivered event objects as read-only**. A contract MUST NOT mutate payload objects it receives; all document changes MUST occur only via explicit patch operations (Appendix A: *Json Patch Entry*).  
**Implementation note (non-normative):** Processors MAY enforce this by cloning or freezing payloads, or by relying on contract implementations to obey the rule; this specification does not require cloning.

**Channel vs Handler capability surface (normative).**

- **Channels may:** decide event acceptance; read/update the scope’s *Channel Event Checkpoint* (Appendix A) for their channel key; `consumeGas(units)`; `terminate(cause, reason?)`.
- **Handlers may:** apply JSON patches; emit events; `consumeGas(units)`; `terminate(cause, reason?)`.

No other effects are permitted.

*(Validated by: T2, T3, T15, T23–T25; reserved-key tamper tests SHOULD be added.)*

* * *

## 16\. Embedded sub-documents & isolation

Some subtrees are independent documents processed alongside the parent.

### 16.1 Process Embedded (marker)

A *Process Embedded* marker (Appendix A) under `contracts/embedded` declares embedded children:

```
contracts:
  embedded:
    # (type defined in Appendix A)
    paths:
      - /payment
      - /shipping
      # Absolute JSON Pointers within the current scope

```

**Normative behavior.**

- **Dynamic list.** The processor reads `paths` before processing each child and re-reads it after each child finishes; additions/removals/reorderings take effect immediately for the next child.
- **Single presence.** There MUST be at most one *Process Embedded* per contracts map. Multiple → runtime fatal (§22.2).
- **Single document model.** All scopes share the same in-memory document. An embedded child patches its subtree in place; parents and ancestors observe changes via Document Update cascades (§17, §21).

### 16.2 Isolation & boundary rule

Let the current scope be absolute pointer **S** (e.g., `/`, `/payment`, `/a/b`). Let **E** be the set of embedded child roots declared by S’s *Process Embedded*.

A patch applied while executing in scope **S** is permitted iff:

- `patch.path` starts with **S**, and
- `patch.path` is **not strictly inside** any other embedded domain **X ∈ E** where `X ≠ S`.

“Strictly inside” means below the child boundary:

```
IS_STRICTLY_INSIDE(path, X) = path starts with (X + "/")

```

**Implications (normative).**

- A **parent** may add/replace/remove an embedded child **root** as a whole (e.g., `add /payment`, `remove /payment`), but MUST NOT reach inside it (e.g., `replace /payment/amount`).
- **Children** may freely patch inside their own subtree (paths of the form `S + "/..."`).
- **Self-root mutation is forbidden.** While executing in scope **S**, a handler or channel MUST NOT target exactly **S** with `add`, `replace`, or `remove`. Only ancestors may add/replace/remove a child root. Violations → runtime fatal (§22.2).
- **Root patch target is forbidden.** No contract at any scope may target `"/"` (§21.2).

**Rationale.** “Cutting/replacing the balloon” is a parent-only operation; scopes only mutate strict descendants of themselves.
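
An informative Python sketch of the path-isolation part of this test follows (reserved-key protection and the other checks performed by `boundary_violation` in §19 are not shown; names mirror the prose and are illustrative).

```
def is_strictly_inside(path: str, boundary: str) -> bool:
    return path.startswith(boundary.rstrip("/") + "/")

def patch_permitted(scope: str, embedded_roots: list, patch_path: str) -> bool:
    if patch_path == "/":
        return False                      # root target forbidden (§21.2)
    if patch_path == scope:
        return False                      # self-root mutation forbidden
    if not patch_path.startswith(scope.rstrip("/") + "/"):
        return False                      # patch must stay within the executing scope
    for child in embedded_roots:
        if child != scope and is_strictly_inside(patch_path, child):
            return False                  # must not reach inside an embedded child
    return True

# Scope "/" declares embedded children /payment and /shipping:
assert patch_permitted("/", ["/payment", "/shipping"], "/payment")              # child root: allowed
assert not patch_permitted("/", ["/payment", "/shipping"], "/payment/amount")   # inside child: forbidden
assert patch_permitted("/payment", [], "/payment/amount")                       # child patches own subtree
assert not patch_permitted("/payment", [], "/payment")                          # self-root: forbidden
```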

*(Validated by: T2, T23–T25.)*

* * *

## 17\. Processor-managed channels (indispensable)

The processor MUST support (and is the only entity that feeds) these channel families (definitions in Appendix A):

- **Document Update Channel.**  
  **Trigger.** When a patch is applied, the processor emits one *Document Update* per scope participating in the cascade: origin → each ancestor → root.  
  **Match.** Subtree semantics: match if the absolute changed path equals or is a descendant of `ABS(scope, path)` (see §21.3).  
  **Payload.** Scope-relative *Document Update* (Appendix A): `op` (`add|replace|remove`), `path` (relative), and `before`/`after` snapshots. Payload content is **uniform within a scope**; channels do not reshape processor-managed payloads.
- **Triggered Event Channel.**  
  Delivers events enqueued by handlers into the scope’s FIFO.  
  **Drain timing (normative):** exactly **once per scope** at Phase 5 (§19). Never drains during cascades.
- **Lifecycle Event Channel.**  
  Delivers lifecycle nodes at a scope (e.g., *Document Processing Initiated*, *Document Processing Terminated*). Lifecycle handlers may patch/emit; emissions are enqueued and later drained per §19. Lifecycle nodes themselves are also **bridgeable** upward (see below).
- **Embedded Node Channel.**  
  Bridges a child scope’s **emissions (Triggered events and lifecycle nodes)** into the parent **after** the child finishes, if the parent configured an Embedded Node Channel for that child’s path.

**Helper notation used in this Part (§21.3):**  
`ABS(S, P)` — absolute pointer for a channel path `P` declared at scope `S`.  
`relativize_pointer(S, ABS)` — path relative to `S` (returns `"/"` when `ABS == S`).  
`relativize_snapshot(S, node)` — subtree content as observed at `S` (clone or read-only view).

*(Validated by: T4–T7, T14–T16, T28.)*

* * *

## 18\. External contracts (extensibility)

Beyond the processor-managed channels, authors may define custom channels, handlers, and markers.

**Normative requirements.**

- **Must-understand.** If a document contains a contract type the processor does not support, the processor MUST NOT run; it returns a must-understand failure (§22.1).
- **External channels** must clearly define what they accept (by type/shape) and how they match.
- **Determinism.** Handlers must be deterministic. Their only effects are: changeDocument (list of *Json Patch Entry*), emitEvent (Blue node), `consumeGas(units)`, and `terminate(cause, reason?)`. No other side effects are permitted.
- **Markers** may set policies; where the spec mandates, the processor MUST obey them.

*(Validated by: T8–T10, T20–T22.)*

* * *

## 19\. The PROCESS Algorithm (applied at root and at each embedded scope)

**Signature.** `PROCESS(doc, event) → (new_doc, triggered_events, total_gas)`  
Events are Blue nodes. There is no envelope.

**Important timings (normative):**

- **Cascades queue, never drain.** Document Update cascades execute handlers immediately at each scope; any Triggered emissions are **enqueued** to that scope’s persistent FIFO and **not delivered during the cascade**.
- **Bridge then drain.** A parent **bridges** child emissions (if configured) **before** it drains its own FIFO.
- **One drain per scope.** A scope’s FIFO is **drained once** at the end of that scope’s `_PROCESS` (Phase 5).

### Top-level wrapper

```
function PROCESS(doc, event):
    RUN.root_events      = []
    RUN.total_gas        = 0
    RUN.emitted_by_scope = {}   # scope → [bridgeable node] (Triggered + lifecycle)
    RUN.fifo_by_scope    = {}   # scope → FIFO (persistent for the run)
    RUN.terminated_scopes = {}  # scope → true when terminated
    try:
        (doc, _scope_emitted) = _PROCESS(doc, event, scope="/")
        return (doc, RUN.root_events, RUN.total_gas)
    except TERMINAL_FAILURE:
        # A fatal termination at root ends the run.
        return (doc, RUN.root_events, RUN.total_gas)

```

### Core routine (applied at every scope)

```
function _PROCESS(doc, event, scope):
    scope_bucket = ensure_bucket(RUN.emitted_by_scope, scope)  # bridgeable emissions (Triggered + lifecycle)
    scope_fifo   = ensure_fifo(RUN.fifo_by_scope, scope)       # persistent FIFO for THIS scope

    # PHASE 1 — Process embedded children (recursive, dynamic)
    processed_paths = insertion_ordered_set()
    loop:
        paths = read_process_embedded_paths(doc, scope)  # validates contracts/embedded is Process Embedded (Appendix A)
        next_path = first p in paths where p ∉ processed_paths
        if next_path is None:
            break
        if node_exists(doc, next_path):
            (doc, _) = _PROCESS(doc, event, scope=next_path)  # single document model
        processed_paths.add(next_path)
        # Re-reads 'paths' after each child: adds/removes/reorders take effect for the next child.
        # Stabilization: once a child path enters processed_paths, it will NOT be processed again in this run,
        # even if removed then re-added later ("no resurrection" within a run).

    # Early-out if this scope was terminated by a child-side cascade or lifecycle
    if RUN.terminated_scopes.get(scope, false):
        return (doc, RUN.emitted_by_scope[scope])

    # PHASE 2 — Initialize this scope (first run only)
    if not has_initialization_marker(doc, scope):
        pre_init_id = compute_blue_id_at_scope(doc, scope)

        # 2.A Lifecycle: Document Processing Initiated (Appendix A)
        lifecycle_node = make_initiated_event(pre_init_id)  # payload defined in Appendix A
        doc = DELIVER_LIFECYCLE(doc, scope, lifecycle_node)

        # 2.B Add Processing Initialized Marker under reserved key 'initialized' (Appendix A)
        ensure_reserved_empty_or_compatible(doc, scope, "initialized", "Processing Initialized Marker")
        doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope,
              patch = json_patch_add( scope + "/contracts/initialized",
                                     make_initialized_marker(pre_init_id) ))  # marker definition in Appendix A

    if RUN.terminated_scopes.get(scope, false):
        return (doc, RUN.emitted_by_scope[scope])

    # PHASE 3 — Match channels; run handlers for the external event
    channels = sort_by_order_then_key( find_matching_channels(doc, scope, event) )
    for ch in channels:
        if RUN.terminated_scopes.get(scope, false):
            break

        if is_external_channel(ch):  # processor-fed families are NOT gated
            # Lazy checkpoint creation: if missing, create empty before newness evaluation
            if not has_checkpoint_marker(doc, scope):
                DIRECT_WRITE(doc, scope + "/contracts/checkpoint", make_empty_checkpoint())
            ckpt = read_checkpoint(doc, scope)  # must exist now
            if not checkpoint_is_new(ckpt, ch.key, event):
                continue

        handlers = sort_by_order_then_key( find_handlers_for_channel(doc, scope, ch) )
        for h in handlers:
            res = execute_handler(h, context_for(scope, event, scope_fifo))
            RUN.total_gas += res.gas_consumed

            # 3.A Apply patches; each applied patch triggers a bottom-up Document Update cascade
            for p in res.patches:
                if boundary_violation(doc, scope, p):  # includes "self-root", "root target", reserved-key write
                    ENTER_TERMINAL_TERMINATION(doc, scope, "Boundary violation at " + p.path)
                doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope, patch=p)

            # 3.B Record & enqueue Triggered emissions
            for t in res.triggered_events:
                EMIT_TO_SCOPE(scope, t)

            # 3.C Optional termination from handler
            if res.terminated:
                ENTER_GRACEFUL_TERMINATION(doc, scope, res.termination_reason)

        if is_external_channel(ch) and not RUN.terminated_scopes.get(scope, false):
            # After successful channel processing, update checkpoint for ch.key with the entire event node
            ckpt2 = read_checkpoint(doc, scope)  # exists
            DIRECT_WRITE(doc, scope + "/contracts/checkpoint", checkpoint_update(ckpt2, ch.key, event))

    if RUN.terminated_scopes.get(scope, false):
        return (doc, RUN.emitted_by_scope[scope])

    # PHASE 4 — Parent-only: bridge each child's emissions via Embedded Node
    # Bridge FIRST (may enqueue many items into THIS scope’s FIFO).
    for child_path in iteration_order(processed_paths):
        embedded_ch = find_embedded_node_channel(doc, scope, child_path)
        if not embedded_ch:
            continue
        child_events = RUN.emitted_by_scope.get(child_path, [])
        if child_events.is_empty():
            continue
        emb_handlers = sort_by_order_then_key( find_handlers_for_channel(doc, scope, embedded_ch) )
        for ev in child_events:
            for h in emb_handlers:
                res = execute_handler(h, context_for(scope, ev, scope_fifo))
                RUN.total_gas += res.gas_consumed
                for p in res.patches:
                    if boundary_violation(doc, scope, p):
                        ENTER_TERMINAL_TERMINATION(doc, scope, "Boundary violation at " + p.path)
                    doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope, patch=p)
                for t in res.triggered_events:
                    EMIT_TO_SCOPE(scope, t)

    if RUN.terminated_scopes.get(scope, false):
        return (doc, RUN.emitted_by_scope[scope])

    # PHASE 5 — Drain THIS scope’s Triggered FIFO (if channel exists)
    if has_triggered_event_channel(doc, scope):
        doc = DRAIN_TRIGGERED_QUEUE(doc, scope)  # drains once

    return (doc, RUN.emitted_by_scope[scope])

```

**Notes.**

- **Balloon cut-off.** If a parent removes a child mid-run, that child’s current handler finishes; no further work (no extra handlers, no FIFO drain) occurs for that scope in this run. Already recorded emissions are preserved and will be bridged to the parent (including *Document Processing Terminated* on fatal). Re-adding the same path later in the run does not schedule it again (“no resurrection” within a run).
- **Direct Write** is defined in §19.1.

### Emit & record helpers

```
# Records a Triggered node as emitted by 'scope' and enqueues it for local delivery.
# - Appends to RUN.emitted_by_scope[scope] (for parent bridging)
# - Enqueues to RUN.fifo_by_scope[scope]
# - If scope is root, also appends to RUN.root_events (returned even if handled)
function EMIT_TO_SCOPE(scope, node):
    bucket = ensure_bucket(RUN.emitted_by_scope, scope)
    fifo   = ensure_fifo(RUN.fifo_by_scope, scope)
    bucket.append(node)
    fifo.enqueue(node)
    if scope == "/":
        RUN.root_events.append(node)

# Records a lifecycle node for bridging ONLY (no local Triggered delivery).
function RECORD_BRIDGEABLE(scope, node):
    bucket = ensure_bucket(RUN.emitted_by_scope, scope)
    bucket.append(node)
    if scope == "/":
        RUN.root_events.append(node)

```

### Apply patch + immediate bottom-up cascades

```
function APPLY_PATCH_WITH_CASCADE(doc, origin_scope, patch):
    before = snapshot_at(doc, patch.path)   # before mutation
    doc    = apply_json_patch(doc, patch)   # §21.2 (absolute pointers, array bounds, auto-materialization)
    after  = snapshot_at(doc, patch.path)   # after mutation (null for remove)

    for scope in ancestors_including_self_up_to_root(origin_scope):   # e.g., /a/b → [/a/b, /a, /]
        channels = sort_by_order_then_key(
                     filter_subtree_matches_by_absolute_path(
                        find_document_update_channels(doc, scope),
                        changed_abs_path = patch.path))

        if channels.is_empty():
            continue

        # One immutable payload object per scope (same for all handlers in S)
        event = make_document_update_event(scope, patch.op, patch.path, before, after)  # Appendix A “Document Update”

        fifo = ensure_fifo(RUN.fifo_by_scope, scope)
        for ch in channels:
            handlers = sort_by_order_then_key( find_handlers_for_channel(doc, scope, ch) )
            for h in handlers:
                res = execute_handler(h, context_for(scope, event, fifo))
                RUN.total_gas += res.gas_consumed
                for nested in res.patches:
                    if boundary_violation(doc, scope, nested):  # includes reserved-key write-protection
                        ENTER_TERMINAL_TERMINATION(doc, scope, "Boundary violation at " + nested.path)
                    doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope, patch=nested)
                for emitted in res.triggered_events:
                    EMIT_TO_SCOPE(scope, emitted)

    return doc

```

### Drain a scope’s Triggered FIFO

```
function DRAIN_TRIGGERED_QUEUE(doc, scope):
    trig_ch = get_triggered_event_channel(doc, scope)
    if not trig_ch:
        return doc

    fifo = ensure_fifo(RUN.fifo_by_scope, scope)
    while not fifo.is_empty():
        ev = fifo.dequeue()
        handlers = sort_by_order_then_key( find_handlers_for_channel(doc, scope, trig_ch) )
        for h in handlers:
            res = execute_handler(h, context_for(scope, ev, fifo))
            RUN.total_gas += res.gas_consumed
            for p in res.patches:
                if boundary_violation(doc, scope, p):
                    ENTER_TERMINAL_TERMINATION(doc, scope, "Boundary violation at " + p.path)
                doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope, patch=p)
            for t in res.triggered_events:
                EMIT_TO_SCOPE(scope, t)
    return doc

```

### Deliver Lifecycle at a scope

```
function DELIVER_LIFECYCLE(doc, scope, lifecycle_node):
    # Record lifecycle node for bridging (always)
    RECORD_BRIDGEABLE(scope, lifecycle_node)

    life_channels = sort_by_order_then_key( find_lifecycle_channels(doc, scope) )
    fifo = ensure_fifo(RUN.fifo_by_scope, scope)
    for ch in life_channels:
        handlers = sort_by_order_then_key( find_handlers_for_channel(doc, scope, ch) )
        for h in handlers:
            res = execute_handler(h, context_for(scope, lifecycle_node, fifo))
            RUN.total_gas += res.gas_consumed
            for p in res.patches:
                if boundary_violation(doc, scope, p):
                    ENTER_TERMINAL_TERMINATION(doc, scope, "Boundary violation at " + p.path)
                doc = APPLY_PATCH_WITH_CASCADE(doc, origin_scope=scope, patch=p)
            for t in res.triggered_events:
                EMIT_TO_SCOPE(scope, t)
    return doc

```

* * *

## 19.1 Direct Writes (normative)

A **Direct Write** is a processor mutation that **does not** produce a Document Update cascade and does **not** schedule any cascade work. It is used only for:

- Writing the *Processing Terminated Marker* (Appendix A) at a scope on termination (§22).
- **Creating** a *Channel Event Checkpoint* (Appendix A) at a scope **lazily** when an external channel is first evaluated and no checkpoint exists (§23).
- **Updating** the checkpoint’s `lastEvents[channelKey]` after successful external channel processing (§23).

Direct Writes are visible to subsequent logic in the same run and persist in `new_doc`. They appear only under reserved keys (e.g., `/…/contracts/terminated`, `/…/contracts/checkpoint`). Handlers/channels cannot perform Direct Writes; they are processor-internal (§15, §23).

* * *

## 20\. Initialization (first-run)

On the first processing of a scope:

1. **Compute** the pre-init BlueId of the scope’s subtree.
2. **Publish** *Document Processing Initiated* (Appendix A) via the scope’s Lifecycle Event Channel (root also records it in `triggered_events`).
3. **Add** *Processing Initialized Marker* (Appendix A) under `contracts/initialized` (**this patch MUST cause a Document Update cascade**).

**Reserved keys** are listed in §15; incompatible presence is fatal (§22.2).

**No eager checkpoint creation.** The processor MUST NOT create `contracts/checkpoint` during initialization solely to satisfy presence. The checkpoint is created **lazily** per §23 when an external channel is first evaluated at this scope.

*(Validated by: T3, T15.)*

* * *

## 21\. Document Updates & JSON Patch semantics

### 21.1 Document Update cascades (immediate & bottom-up)

One patch → a cascade of deliveries. When a handler’s patch applies successfully, the processor:

- captures `before`/`after` snapshots at the patch’s absolute path,
- delivers exactly one *Document Update* (Appendix A) per scope along the chain: origin → each ancestor → root, in that order.

At each scope **S**:

- **Matching** uses the patch’s **absolute** changed path vs `ABS(S, P)` with subtree semantics (equal or descendant).
- **Delivered event has uniform content within S.** All handlers at `S` see the same `op` and the same `before`/`after`snapshots; the `path` is **scope-relative** to `S`. **Contracts MUST treat this event as read-only** (§15).
- **Handlers may** patch/emit/`consumeGas`/`terminate`.
- **No drain during cascades (normative).** Triggered emissions produced during a cascade are enqueued to `S`’s persistent FIFO and MUST NOT be delivered during the cascade. `S`’s FIFO is drained once, at the end of `S`’s `_PROCESS` (Phase 5 in §19).

*(Validated by: T4, T5, T14, T17.)*

### 21.2 JSON Patch semantics (normative)

Handlers emit *Json Patch Entry* objects (Appendix A). The processor applies each patch immediately to the shared document, then performs the Document Update cascade (§21.1). Blue supports a practical, deterministic subset of RFC 6902:

- **Supported operations:** `add`, `replace` (upsert semantics), `remove`.  
  Other ops (`move`, `copy`, `test`, …) → deterministic runtime fatal.

**Pointer evaluation.**

- Patch path values are **absolute JSON Pointers** (must begin with `/`); segments are interpreted literally.
- Numeric segments are allowed (arrays).
- `-` is accepted (array append).
- Escaped tokens (e.g., `~1`) are treated verbatim (per JSON Pointer).
- Pointers resolve against the current document state, after earlier patches in the same run.

**Root-document target forbidden (normative).**  
The target path MUST NOT be `"/"` (document root). Replacing/removing the entire document is forbidden.

**Object auto-materialization.**  
Missing intermediate objects are materialized as empty objects before applying the patch. Auto-created containers are part of the same patch; the *Document Update* describes the final requested path.

**Object targets.**  
`add` inserts a new member or replaces an existing member.  
`replace` behaves as upsert.  
`remove` deletes a member. Removing a non-existent member → deterministic runtime fatal.

**Array targets.**  
Segments may be numeric indices or `-`.  
`add` with index inserts at that position (shifting).  
`add` with `-` appends.  
`replace` overwrites element at index (index must exist).  
`remove` deletes the element at index (shifts left; index must exist).  
Out-of-range indices (for `replace`/`remove`) → deterministic runtime fatal.

**Boundary enforcement.**  
Every patch is validated against embedded-scope isolation (§16). Any attempt to modify another scope’s interior → deterministic runtime fatal. A parent may add/replace/remove a child root (e.g., `/payment`), but MUST NOT reach inside it (e.g., `/payment/amount`). A child MAY NOT target its own scope root **S** (self-root mutation forbidden).

These rules let authors grow deep structures and manipulate arrays predictably while preserving the one-patch → one-cascade guarantee and determinism.
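
The following informative Python sketch condenses the supported patch subset. Errors are collapsed to exceptions that a processor would map to deterministic runtime fatals; it is illustrative, not a reference implementation.

```
def apply_patch(doc: dict, op: str, path: str, value=None):
    if path == "/" or not path.startswith("/"):
        raise ValueError("pointer must be absolute and must not target the document root")
    if op not in ("add", "replace", "remove"):
        raise ValueError(f"unsupported op: {op}")        # move/copy/test → runtime fatal

    tokens = path[1:].split("/")                         # segments interpreted literally
    parent, last = doc, tokens[-1]
    for t in tokens[:-1]:                                # walk, auto-materializing objects
        parent = parent[int(t)] if isinstance(parent, list) else parent.setdefault(t, {})

    if isinstance(parent, list):                         # array targets
        if op == "add":
            if last == "-":
                parent.append(value)                     # '-' appends
            else:
                parent.insert(int(last), value)          # index insert (shifting)
        elif op == "replace":
            parent[int(last)] = value                    # IndexError if out of range
        else:
            del parent[int(last)]                        # shifts left; IndexError if missing
    else:                                                # object targets
        if op == "remove":
            if last not in parent:
                raise ValueError("removing a non-existent member is fatal")
            del parent[last]
        else:
            parent[last] = value                         # add and replace behave as upsert

doc = {"order": {"lines": [{"sku": "A1"}]}}
apply_patch(doc, "add", "/order/lines/-", {"sku": "B2"})   # array append with '-'
apply_patch(doc, "replace", "/order/status", "confirmed")  # upsert on an object member
apply_patch(doc, "add", "/order/meta/tags", "priority")    # /order/meta auto-materialized as {}
```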

### 21.3 Helper notation (normative)

`ABS(S, P)` — the absolute JSON Pointer for a channel path `P` declared at scope `S` (normalized concatenation of `S` and `P`).  
`relativize_pointer(S, ABS)` — the relative pointer from scope `S`’s root to `ABS` (returns `"/"` when `ABS == S`).  
`relativize_snapshot(S, node)` — a read-only view or clone of `node` as observed at `S`. Nodes carry no path context; the relative path provides the location for handlers.
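
An informative Python sketch of the pointer helpers (the snapshot helper is omitted; names mirror the notation above):

```
def ABS(scope: str, path: str) -> str:
    """Absolute JSON Pointer for a channel path declared at a scope."""
    base = "" if scope == "/" else scope.rstrip("/")
    return base + path if path != "/" else (base or "/")

def relativize_pointer(scope: str, abs_path: str) -> str:
    if abs_path == scope:
        return "/"
    base = "" if scope == "/" else scope.rstrip("/")
    return abs_path[len(base):]

assert ABS("/payment", "/amount") == "/payment/amount"
assert ABS("/", "/payment") == "/payment"
assert relativize_pointer("/payment", "/payment/amount") == "/amount"
assert relativize_pointer("/payment", "/payment") == "/"
```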

* * *

## 22\. Failure & termination semantics

### 22.1 Capability failure (must-understand)

If the document contains a contract type the processor does not support, the processor MUST return a must-understand failure and MUST NOT mutate the document or emit lifecycle fatals. *(T9.)*

### 22.2 Termination (graceful or fatal)

A channel or handler may invoke `terminate(cause, reason?)`, and the processor may terminate a scope **fatally** for deterministic runtime errors (e.g., boundary violation).

When a scope **terminates** (either cause):

1. **Direct Write** `contracts/terminated` with a *Processing Terminated Marker* (Appendix A), setting:
- `cause: "graceful"` or `"fatal"`
- `reason: <optional Text>`
2. **Publish** *Document Processing Terminated* at that scope (Appendix A; Lifecycle Event Channel).
- Lifecycle is also **recorded for bridging** via `RECORD_BRIDGEABLE` (§19).
3. **Deactivate** the scope for the rest of the run: mark `RUN.terminated_scopes[scope]=true` and **drop** its FIFO; further patch/emit from that scope are **no-ops**.

**Escalation rules.**

- **Non-root fatal → scope-terminal only:** preserve already recorded emissions for parent bridging; continue elsewhere.
- **Root fatal → run-terminal:** additionally append *Document Processing Fatal Error* (Appendix A) to the root run result (outbox-only; never routed) and abort the run (`raise TERMINAL_FAILURE`).
- **Non-root graceful →** scope ends normally; parent may still bridge the scope’s emissions (including the *Terminated* lifecycle) if configured.

*(Validated by: T23–T30.)*

**Termination helpers (informative pseudocode).**

```
function ENTER_GRACEFUL_TERMINATION(doc, scope, reason?):
    DIRECT_WRITE(doc, scope + "/contracts/terminated",
                 make_terminated_marker(cause="graceful", reason=reason))
    term_ev = make_terminated_event(cause="graceful", reason=reason)  # Appendix A
    doc = DELIVER_LIFECYCLE(doc, scope, term_ev)
    RUN.terminated_scopes[scope] = true
    clear_fifo(RUN.fifo_by_scope, scope)
    if scope == "/":
        # Root graceful ends the run after current call stack unwinds
        raise TERMINAL_FAILURE_GRACEFUL
    return doc

function ENTER_TERMINAL_TERMINATION(doc, scope, reason):
    DIRECT_WRITE(doc, scope + "/contracts/terminated",
                 make_terminated_marker(cause="fatal", reason=reason))
    term_ev = make_terminated_event(cause="fatal", reason=reason)  # Appendix A
    doc = DELIVER_LIFECYCLE(doc, scope, term_ev)
    RUN.terminated_scopes[scope] = true
    clear_fifo(RUN.fifo_by_scope, scope)
    if scope == "/":
        # Outbox-only fatal lifecycle
        RUN.root_events.append(make_fatal_outbox(reason))  # Appendix A
        raise TERMINAL_FAILURE
    return doc

```

* * *

## 23\. Channel Event Checkpoints

**Purpose.** A *Channel Event Checkpoint* records, per **external channel key**, the **last processed event node** (entire node, not just an id), enabling idempotent and ordered processing for external sources. (Type in Appendix A.)

**Presence & lazy creation.**  
A scope may lack `contracts/checkpoint` until it first evaluates an **external** channel. When an external channel at scope **S** is evaluated and `contracts/checkpoint` is **absent**, the processor **MUST** `DIRECT_WRITE(S + "/contracts/checkpoint", make_empty_checkpoint())` **before** applying the newness test. From that point on, the scope has exactly one checkpoint. **Multiple checkpoints** in a scope are a deterministic runtime fatal.

**Gating rule (external channels only).**  
For a matched **external** channel at scope **S**:

- The processor consults the scope’s checkpoint to decide whether the incoming event is **newer** than `lastEvents[channelKey]` (deterministic policy).
- If **not newer**, the channel is **skipped** (no handlers run).
- If **newer** and the channel completes successfully, the processor **MUST** `DIRECT_WRITE` an update setting `lastEvents[channelKey]` to the **entire event node** (no Document Update is emitted).

**Not gated.** Processor-managed families (*Document Update, Triggered, Lifecycle, Embedded Node*) are never subject to checkpoint gating.

**Tamper resistance.** Reserved keys are write-protected (§15). Any handler attempt to patch `contracts/checkpoint` (or descendants) is a deterministic runtime fatal.
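
An informative Python sketch of the gating flow follows. `is_newer` stands in for the channel's deterministic newness policy and `run_handlers` for §19 Phase 3; the direct dictionary writes model Direct Writes (§19.1). Names are illustrative.

```
def gate_external_channel(scope_contracts: dict, channel_key: str, event,
                          is_newer, run_handlers) -> bool:
    # Lazy creation via Direct Write: no Document Update cascade is produced.
    checkpoint = scope_contracts.setdefault("checkpoint", {"lastEvents": {}})
    last = checkpoint["lastEvents"].get(channel_key)
    if last is not None and not is_newer(event, last):
        return False                                   # not newer → channel skipped
    run_handlers(event)                                # handlers for this channel run
    # Direct Write: record the ENTIRE event node for this channel key.
    checkpoint["lastEvents"][channel_key] = event
    return True

# Example: an external channel whose events carry a monotonically increasing sequence.
contracts = {}
newer = lambda ev, last: ev["seq"] > last["seq"]
handled = []
gate_external_channel(contracts, "aliceTimeline", {"seq": 1, "msg": "A"}, newer, handled.append)
gate_external_channel(contracts, "aliceTimeline", {"seq": 1, "msg": "A"}, newer, handled.append)
assert len(handled) == 1                               # the duplicate was gated out
assert contracts["checkpoint"]["lastEvents"]["aliceTimeline"]["seq"] == 1
```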

*(Validated by: T8, T18 (lazy creation), T19–T22.)*

* * *

## 24\. Gas accounting

### 24.1 Philosophy & unit (normative)

**Purpose.** Gas accounting prevents resource-exhaustion while remaining proportional to actual work.

**Unit.** Gas is an abstract deterministic unit. Processors MUST NOT base gas on wall-clock, CPU model, memory, or I/O. *(Informative calibration: profiles MAY document a human-readable mapping; conformance depends solely on the formulas below.)*

### 24.2 What is charged and when (normative)

Processors MUST add the following charges to `RUN.total_gas` at the exact points indicated. Charges apply in addition to any explicit `consumeGas(units)` from channels/handlers.

#### 24.2.1 Scope management

| Operation | Formula | Charge point |
| --- | --- | --- |
| Scope entry | `50 + 10 × depth` | On entry to `_PROCESS` for a scope (root depth=0) |
| Scope exit | `0` | On return from `_PROCESS` |
| Initialization (first run) | `1000` | When §20 initialization starts for a scope (covers initiated lifecycle + marker patch orchestration) |

`depth` = number of embedded edges from root (`/`→0; `/a/b`→2).

#### 24.2.2 Matching & routing

| Operation | Formula | Charge point |
| --- | --- | --- |
| Channel match attempt | `5` per channel tested | Each channel considered in §19 Phase 3 |
| Handler call overhead | `50` | Just before executing each handler |

#### 24.2.3 Patches & cascades

| Operation | Formula | Charge point |
| --- | --- | --- |
| Boundary check | `2` per patch | Before applying each patch |
| Patch: add/replace | `20 + ceil(bytes/100)` | After validation, before cascade |
| Patch: remove | `10` | After validation, before cascade |
| Cascade routing | `10` per participating scope | For each scope that receives the resulting Document Update |

`bytes` = UTF-8 length of the canonical JSON of the patch **val** after Part I §8.2 cleaning/normalization (RFC 8785).

#### 24.2.4 Event emission, bridging & draining

| Operation | Formula | Charge point |
| --- | --- | --- |
| Emit event | `20 + ceil(bytes/100)` | When `emitEvent(node)` succeeds (enqueued) |
| Bridge child → parent | `10` per bridged node | For each node delivered by Embedded Node Channel in §19 Phase 4 (Triggered or lifecycle) |
| Drain FIFO | `10` per dequeued event | For each event dequeued in §19 Phase 5 (before handler overhead) |

`bytes` = canonical JSON size of the emitted event node.

#### 24.2.5 Checkpoints & direct writes

| Operation | Formula | Charge point |
| --- | --- | --- |
| Checkpoint read | `0` | When a channel consults the scope checkpoint |
| Checkpoint update (Direct Write) | `20` | After a matching external channel completes successfully |
| Termination marker write (Direct Write) | `20` | When writing `contracts/terminated` (§22) |

Direct Write is defined in §19.1; it never triggers Document Update.

#### 24.2.6 Lifecycle & termination

| Operation | Formula | Charge point |
| --- | --- | --- |
| Lifecycle delivery | `30` | Per `DELIVER_LIFECYCLE` call (once per call, before any handlers) |
| Graceful termination overhead | `+0` | On `terminate(cause="graceful")` (marker + lifecycle already charged) |
| Fatal termination overhead | `+100` | On fatal termination (in addition to termination marker & lifecycle delivery) |
| Must-understand failure | `0` | §22.1 capability failure (pre-execution) |

Fatal termination total for the termination step: **150 gas** (= marker `20` + lifecycle `30` + fatal overhead `100`), plus any prior work already charged. Graceful termination step: **50 gas** (= marker `20` + lifecycle `30`).

### 24.3 Accumulation & determinism (normative)

`RUN.total_gas` MUST include:

- all processor charges from §24.2, and
- all explicit `consumeGas(units)` calls made by channels/handlers.

Gas MUST be computed solely from:

- operation counts (handlers run, events enqueued/dequeued/bridged, scopes in cascade),
- structural properties (scope depth, cascade length),
- measurable sizes (canonical JSON byte length).

Gas MUST NOT depend on:

- wall-clock time, CPU speed/architecture, memory pressure,
- network/IO latency, OS scheduler effects.

Given the same input document and event, all conforming processors MUST return the same `total_gas`.

### 24.4 Policies & overruns (normative)

**Accounting only.** Part II defines gas accounting, not enforcement. Gas is recorded to `RUN.total_gas` for transparency and diagnostics. Absent an active policy, the processor MUST NOT modify behavior or terminate due to gas usage; it simply continues and returns the measured total.

**Policy‐driven enforcement (external).** Budgets/limits MAY be defined by separate, document-authored policies (markers/contracts outside this Part). Such policies may specify what budgets exist, when/how measurements are compared to budgets, and the exact behavior on overrun (e.g., graceful/fatal termination per §22, skipping work, emitting signals, etc.). The processor MUST implement any present policy it claims to support exactly. If a policy type is present but unsupported, the processor MUST return a must-understand failure (§22.1).

**Determinism.** Policies MUST be deterministic and base decisions only on data available within the run (including the gas totals defined in §24), to preserve cross-implementation consistency.

### 24.5 Validation examples (informative)

- **Simple replace at root (~4 bytes)** — Matching (1×5) + handler overhead (50) + patch replace (20) + cascade (1×10) → **85 gas** (+ any handler `consumeGas`).
- **Deep scope entry (depth=10) first run, small patch** — Entry (50+100) + init (1000) + handler (50) + replace (~100B → 21) + cascade (2×10) → **1241 gas** (+ handler `consumeGas`).
- **2 KB replace across 5 scopes** — Replace (20+20) + cascade (5×10) + handler overhead (5×50) → **340 gas** (+ handler logic at each scope).
- **Emit 500×1 KB events** — Emissions (500×(20+10)) + drain (500×10) + handler overhead (500×50) → **45 000 gas** (+ handler `consumeGas`).

Sizes use canonical JSON per Part I §8.2.

### 24.6 Implementation hooks (informative)

Where to add charges in the algorithm (§19):

- `_PROCESS` entry: scope-entry charge.
- §20 init path: init charge; lifecycle delivery charge; if a checkpoint marker is created, do not charge gas for this creation.
- Phase 3: per channel match attempt; per handler overhead; per patch boundary check + op charge + cascade charge (count scopes actually receiving *Document Update*); per emitted event charge.
- Phase 4: per bridged node charge (before running handlers).
- Phase 5: per dequeued event charge (before handler overhead).
- §22 termination: termination marker Direct Write, lifecycle delivery, and fatal overhead (if fatal).

Bytes measurement: apply Part I §8.2 cleaning & RFC 8785 canonicalization; use UTF-8 length of the canonical JSON string; use `ceil(bytes/100)` in formulas.
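
The hook placement can be pictured as a small TypeScript skeleton (illustrative only; the helper names are assumptions and only a subset of the charges is shown):

```typescript
// Non-normative skeleton; formulas follow §24.2.
const GAS = {
  scopeEntry: (depth: number) => 50 + 10 * depth,
  init: () => 1000,
  lifecycleDelivery: () => 30,
  matchAttempt: () => 5,
  handlerOverhead: () => 50,
  bridgedNode: () => 10,
  dequeuedEvent: () => 10,
};

function chargeEntryAndInit(run: { totalGas: number }, depth: number, firstRun: boolean): void {
  run.totalGas += GAS.scopeEntry(depth);        // on entry to _PROCESS (exit charges 0)
  if (firstRun) {
    run.totalGas += GAS.init();                 // §20 initialization starts
    run.totalGas += GAS.lifecycleDelivery();    // DELIVER_LIFECYCLE for 'Initiated'
  }
  // Phase 3: + matchAttempt per channel, + handlerOverhead per handler,
  //          + patch / cascade / emit charges per §24.2.3 and §24.2.4.
  // Phase 4: + bridgedNode per delivered node.  Phase 5: + dequeuedEvent per drained event.
  // §22 termination: +20 marker Direct Write, +30 lifecycle delivery, +100 extra if fatal.
}
```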

* * *

## 25\. Processor vs Feeder (division of responsibility)

**Feeder (out of scope here)** — collects external events, orders/deduplicates them per the application’s policy (scheduler, vector clocks, chain logs…). The feeder may deliver stale/out-of-order items; *Channel Event Checkpoint* ensures scopes ignore stale external events.

**Processor (this Part)** — given `(document, event)`, executes exactly the rules in this Part (initialization, matching, cascades, bridging, FIFO drains, failures/termination) to produce `(new_doc, triggered_events, total_gas)`.
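
For orientation, the processor's run contract can be pictured as a pure function; the TypeScript names below are illustrative, not a defined API:

```typescript
// Illustrative signature only; names are not part of the specification.
type BlueNode = unknown;

interface RunResult {
  newDoc: BlueNode;            // document after applied patches and Direct Writes
  triggeredEvents: BlueNode[]; // root-scope emissions plus root lifecycle items
  totalGas: number;            // accumulated per §24
}

// Deterministic: the same (document, event) pair yields the same RunResult on every
// conforming processor; ordering/deduplication of external events is the feeder's job.
type Processor = (document: BlueNode, event: BlueNode) => RunResult;
```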

* * *

## 26\. Conformance checklist (normative)

A compliant processor MUST:

**Embedded traversal**

- Read *Process Embedded* paths dynamically, re-reading after each child, and process each existing child at most once per run in first-seen order (**no resurrection** within a run).
- Enforce the boundary rule (§16): the parent can add/replace/remove the child root, but MUST NOT patch strictly inside another active embedded subtree.
- Enforce **self-root mutation forbidden** and **root** `"/"` **target forbidden**.

**Contract capabilities**

- Enforce must-understand: any unsupported contract type → capability failure (no mutation, no lifecycle fatals).
- Use deterministic sorting for channels and handlers: `(order, key)` at every scope.
- Treat delivered payloads as immutable (§15).
- Enforce write-protection of reserved keys (§15).

**Initialization**

- On first run at a scope: publish *Document Processing Initiated* and add *Processing Initialized Marker* (patch → *Document Update*).
- Do **not** create a checkpoint at init time; create lazily per §23.

**Patch & cascade semantics**

- After every applied patch, emit exactly one *Document Update* per scope in the cascade: origin → ancestors → root.
- Match *Document Update* channels by absolute changed path vs `ABS(S,P)` with subtree semantics.
- Deliver a scope-relative payload; all handlers at a scope MUST see the same *Document Update* payload object for a given patch.
- Maintain a per-scope Triggered FIFO; never drain during cascades; drain once at the end of the scope’s `_PROCESS` (Phase 5).
- Record every emission at its scope; append root-scope emissions and root lifecycle items to the run’s `triggered_events`.

**JSON Patch subset**

- Implement supported ops: `add`, `replace` (upsert), `remove`. Reject other RFC 6902 ops.
- Pointers are absolute; support numeric indices and `-` for append.
- Do not allow the document root `"/"` as a patch target.
- Auto-materialize missing intermediate objects.
- Enforce array bounds and object existence (out-of-range / remove-missing → deterministic runtime fatal).

**Checkpoint presence & behavior**

- Exactly one *Channel Event Checkpoint* per scope **after** first external evaluation (Appendix B). Duplicate → runtime fatal.
- For external channels, gate using the checkpoint’s newness rule; skip stale/duplicate events.
- After successful channel processing, **Direct Write** `lastEvents[channelKey] = <entire event node>` (no *Document Update*).
- Processor-managed families are not gated.

**Failure/termination**

- On termination, **Direct Write** *Processing Terminated Marker* with cause (`fatal`/`graceful`), publish *Document Processing Terminated*, and deactivate the scope for the remainder of the run.
- On root fatal, also append *Document Processing Fatal Error* to the root run result and abort the run.

**Run result**

- Return `(new_doc, triggered_events, total_gas)` exactly as defined in §15/§19.

* * *

## 27\. Test vectors (normative, behavior-defining)

**T1 — Dynamic embedded list**  
Root declares paths: \[/a, /b\]. While processing /a, a root handler removes /b and adds /c.  
**Then:** After /a, the processor re-reads paths and visits /c; /b is skipped (no longer exists).

**T2 — Boundary enforcement**  
Root attempts `replace /a/x` while `/a` is an active embedded child.  
**Then:** Fatal termination at root; `contracts/terminated` is written with *Processing Terminated Marker* (cause: fatal), a *Document Processing Fatal Error* is appended to the root run result; run aborts.

**T3 — Initialization once**  
First run at /a writes *Processing Initialized Marker* (patch) and publishes *Document Processing Initiated*.  
**Then:** The patch triggers a *Document Update* cascade; lifecycle may trigger additional work at /a.

**T4 — Update cascades (absolute match & relative payload)**  
A handler at /a applies `replace /a/z/k`. Root has a *Document Update Channel* watching `/a/z`.  
**Then:** At scope /a, payload `path="/z/k"`; at root, `"/a/z/k"`. Matching uses absolute paths; each scope sees the same payload content relative to itself.

**T5 — Cascade emissions are enqueued (not delivered)**  
Patch at /a/b causes a *Document Update* handler at /a to emit E.  
**Then:** E is recorded under /a and enqueued; it is delivered later in /a’s Phase 5 (not during the cascade).

**T6 — Triggered FIFO (deterministic order)**  
A handler at /a emits E1, E2. /a has a *Triggered Event Channel*.  
**Then:** /a drains FIFO (E1 then E2); further emissions during drain append to the tail and are processed deterministically.

**T7 — Bridging child emissions**  
Child /x emits events during its run. Parent has an *Embedded Node Channel* for /x.  
**Then:** After parent completes external handling, it bridges /x’s emissions (patches cascade), then drains the parent FIFO once.

**T8 — Checkpoint gating (external only)**  
A scope checkpoint exists. Two external channels match the event; one is stale, one is new.  
**Then:** The stale one is skipped; the new one runs; checkpoint updates via Direct Write.

**T9 — Capability failure (must-understand)**  
Document contains unknown contract type.  
**Then:** Processor returns must-understand failure; no patches; no lifecycle fatals.

**T10 — No-match**  
No channel matches anywhere.  
**Then:** Processor returns unchanged document, empty `triggered_events`, measured `total_gas`.

**T11 — Object auto-materialization**  
A handler applies `add /a/b/c {…}` where `/a` exists but `/a/b` does not.  
**Then:** Processor creates `/a/b` as `{}` then writes `/a/b/c`; one cascade runs.

**T12 — Array append and insert**  
Given `/a/items: ["x","y"]`.  
`add /a/items/- "z"` → `["x","y","z"]` (append).  
`add /a/items/1 "q"` → `["x","q","y"]` (insert).  
**Then:** Each patch triggers one cascade; out-of-range indices → fatal (T13).

**T13 — Deterministic runtime fatals (arrays & objects)**  
`replace /a/items/7 "z"` when length < 8 → fatal.  
`remove /a/missingKey` → fatal.  
**Then:** The failing scope gets `contracts/terminated` with *Processing Terminated Marker* (cause: fatal); root fatal only if the scope is root.

**T14 — Scope-relative payload**  
Patch at /a/b to replace `/a/b/x`.  
**Then:** At /a/b: `path="/x"`, at /a: `"/b/x"`, at root: `"/a/b/x"`. Same `op`, `before`, `after`; only path is scope-relative.

**T15 — Root lifecycle inclusion**  
First processing at root publishes *Document Processing Initiated*; later, a fatal elsewhere.  
**Then:** Root run result includes both lifecycle items in `triggered_events`.

**T16 — Local delivery depends on Triggered Channel presence**  
During a cascade at /a, a handler emits E. /a lacks a *Triggered Event Channel*.  
**Then:** E is recorded as emitted by /a (**bridgeable upward if parent configured**) but is **not delivered locally** at /a.

**T17 — Uniform event per scope**  
Multiple *Document Update Channels* at /a match the same patch.  
**Then:** All handlers at /a see the same *Document Update* payload object; differences arise only from channel/handler order/matching.

**T18 — Lazy checkpoint creation**  
A scope processes an **external** event and `contracts/checkpoint` is **absent**.  
**Then:** Before newness evaluation, the processor `DIRECT_WRITE`s an empty checkpoint at `/…/contracts/checkpoint` (`lastEvents = {}`). Newness evaluates as “no prior”; if handlers succeed, `lastEvents[channelKey]` becomes the **entire event node** by Direct Write. No *Document Update* is emitted for either write.

**T19 — Duplicate checkpoint**  
A scope contains two *Channel Event Checkpoint* markers.  
**Then:** Runtime fatal (only one is permitted).

**T20 — Stale external event is gated**  
`lastEvents.testEventsChannel` holds `E_old`; incoming `E_new` is not newer.  
**Then:** Channel is skipped; checkpoint unchanged.

**T21 — Checkpoint updated after success**  
A new external event on `testEventsChannel` is processed successfully.  
**Then:** `lastEvents.testEventsChannel = <entire event node>` via Direct Write (no *Document Update*).

**T22 — Multiple external channels at a scope**  
Two external channels match the same event and both are “newer.”  
**Then:** Both run in `(order, key)` order; each updates its own key in `lastEvents`.

**T23 — Self-root mutation is forbidden**  
While executing at /a, a contract attempts `remove /a` (or `replace /a`, `add /a`).  
**Then:** Fatal termination at /a.

**T24 — Root-document mutation is forbidden**  
Any contract attempts to target `"/"` with any op.  
**Then:** Fatal termination at the executing scope (root fatal ends the run).

**T25 — Balloon cut-off on child removal**  
While /b is being processed, a parent watcher removes /b.  
**Then:** The current /b handler finishes; no further work (no extra handlers, no drain) occurs for /b; already recorded /b emissions (including *Terminated* on fatal) are bridgeable; re-adding /b in this run does not schedule it again.

**T26 — Termination is final**  
A scope terminates gracefully; later in the same run a handler at that scope attempts to emit or patch.  
**Then:** No-op; scope is inactive for the remainder of the run.

**T27 — Child fatal does not escalate by default**  
/a terminates fatally.  
**Then:** /a is marked terminated; parent continues; child emissions (including *Terminated*) are bridgeable if parent configured *Embedded Node Channel*.

**T28 — Child graceful termination bridges lifecycle**  
/a terminates gracefully.  
**Then:** Parent may observe *Document Processing Terminated* via *Embedded Node Channel* if configured.

**T29 — Root graceful termination ends run**  
Root terminates gracefully.  
**Then:** Run ends; root outbox includes *Document Processing Terminated*.

**T30 — Root fatal termination ends run with fatal outbox**  
Root terminates fatally.  
**Then:** Run ends; root outbox includes *Document Processing Terminated* and *Document Processing Fatal Error*.

* * *

# Appendix A — Core Primitive & Collection Types

## A.1 Core Primitive

``` 
name: Text
description: >
  Core primitive scalar representing Unicode text (a JSON string).

  - Authoring & wrappers:
      detail: >
        Instances may be authored as scalar sugar (myField: "hello") or as the
        wrapped form (myField: { value: "hello" }). Canonical hashing uses the
        wrapped form (§8.2.2). This type itself does NOT declare a `value`
        field: `value` is the language wrapper for instance payloads (§2.1);
        putting `value` on the type would fix a concrete payload on the type
        object (a fixed-value invariant, §4.1), which is not intended.

  - Length semantics:
      detail: >
        minLength/maxLength count Unicode code points, not bytes and not UTF-16
        code units. A character outside the BMP (e.g., "𝄞") counts as 1.
        A CRLF pair counts as 2 code points ("\r" + "\n").

  - Regex dialect:
      detail: >
        `pattern` uses ECMA-262 syntax. Matching is not implicitly anchored; use
        ^…$ for whole-string matches.

  - Canonical JSON vs content:
      detail: >
        RFC 8785 canonicalization affects only JSON encoding (escapes, key
        ordering of parent objects). It NEVER changes the underlying code-point
        sequence. Equality (including `enum`) compares the parsed scalar value.

  - Unicode normalization:
      detail: >
        Processors MUST NOT normalize Text by default (no NFC/NFD folding).
        The exact code-point sequence is preserved. Any optional normalization
        may occur only via profile-specific preprocessing in `blue` (§7.1).

  - Case/locale:
      detail: >
        No case folding or locale-sensitive collation is implied in Part I.
        Perform such transforms explicitly during preprocessing (profile-
        dependent) or in higher-level contracts (Part II).

  - Empty string:
      detail: >
        The empty string "" is valid unless restricted by schema (e.g., minLength > 0).

  - Escapes & line breaks:
      detail: >
        JSON escapes (\uXXXX, \" \\ \n \r \t \b \f) are authoring/encoding
        details only; after parsing they contribute their code points to length
        and pattern checks.

  - Applicable schema:
      detail: >
        §5.5 string constraints: minLength, maxLength, pattern.
```
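
To make the length rule concrete, a small TypeScript check (illustrative only) counting by code points rather than UTF-16 code units:

```typescript
// Counting by code points: the spread operator iterates a string one code point at a time.
const codePointLength = (s: string): number => [...s].length;

codePointLength("\u{1D11E}"); // 1  ("𝄞": one code point, two UTF-16 units, s.length === 2)
codePointLength("\r\n");      // 2  (a CRLF pair is two code points)
```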

``` 
name: Integer
description: >
  Primitive numeric scalar for mathematical integers (ℤ).

  - Domain & arithmetic:
      detail: >
        Represents …, -2, -1, 0, 1, 2, … with arbitrary precision. There is no
        fixed bit width and no overflow. Arithmetic is exact; operations may
        fail only due to resource exhaustion.

  - Authoring & wrappers:
      detail: >
        Scalar sugar (x: 1) and wrapped form (x: { value: 1 }) are equivalent
        for authoring; canonical hashing uses the wrapped form (§8.2.2). The
        type does NOT declare `value` because that wrapper belongs to instances;
        declaring it on the type would fix a payload on the type object (§2.1, §4.1).

  - Canonical textual form:
      detail: >
        Optional leading "-" for negatives, followed by one or more decimal
        digits; no leading zeros except the single digit "0".

  - Schema & combination:
      detail: >
        §5.4 numeric constraints apply: minimum, maximum, exclusiveMinimum,
        exclusiveMaximum, multipleOf (> 0). If multiple `multipleOf` appear in a
        type chain, combine via least common multiple (LCM) as specified.

  - Equality & enums:
      detail: >
        Equality (incl. `enum`) compares the parsed scalar value in canonical JSON terms.
```

``` 
name: Double
description: >
  Primitive numeric scalar for floating-point real numbers.

  - Semantics:
      detail: >
        Computation aligns with IEEE 754 binary64 ("double precision").
        JSON permits only finite numbers; NaN and ±Infinity are invalid Blue nodes.

  - Authoring & wrappers:
      detail: >
        Scalar sugar (x: 1.25) and wrapped form (x: { value: 1.25 }) are
        equivalent for authoring; canonical hashing uses the wrapped form
        (§8.2.2). The type does NOT declare `value` (instance wrapper); putting
        it on the type would fix an instance payload (§2.1, §4.1).

  - Canonical textual form:
      detail: >
        RFC 8785 canonical-JSON number: base-10 notation, no leading zeros, no
        leading "+", optional exponent with lower-case "e", and no unnecessary
        trailing zeros or decimal point.

  - Precision & comparison:
      detail: >
        Processors should be aware of binary64 rounding when performing numeric
        operations. Schema comparisons use the numeric value (per §5.4). Equality
        and `enum` compare by canonical JSON value semantics, not by byte shape.

  - Schema:
      detail: >
        §5.4 numeric constraints apply: minimum, maximum, exclusiveMinimum,
        exclusiveMaximum, multipleOf (> 0).
```

``` 
name: Boolean
description: >
  Primitive scalar with exactly two values: true and false.

  - Authoring & wrappers:
      detail: >
        Scalar sugar (x: true) and wrapped form (x: { value: true }) are
        equivalent; canonical hashing uses the wrapped form (§8.2.2). The type
        does NOT declare `value` because it is an instance-level wrapper; adding
        it to the type would fix a payload on the type object (§2.1, §4.1).

  - Semantics:
      detail: >
        No truthiness beyond the two literals; only `true` and `false` are valid.

  - Equality & enums:
      detail: >
        Equality (incl. `enum`) compares the parsed boolean value; canonical JSON
        atoms are lower-case `true` / `false`.
```

``` 
name: Dictionary
description: >
  Object map from keys to values (a plain Blue object node).

  - Object shape (no properties field):
      detail: >
        Blue objects are plain maps of field → node; there is no `properties`
        key in the language (§2, §8.2.4). Dictionary is just an object with
        typing constraints for its keys/values.

  - Why not `value`/`items`:
      detail: >
        Dictionary is neither a scalar nor a list wrapper; `value`/`items` are
        instance wrappers for scalars/lists (§2.1). They do not apply to maps.

  - Key typing & serialization:
      detail: >
        Keys are serialized using the `keyType`'s canonical textual form. Because
        JSON member names are strings, non-Text keys are converted to strings
        (e.g., Integer 42 → "42", Double 1.0 → "1"). Distinct values that map to
        the same canonical string will collide; authors SHOULD choose a `keyType`
        that avoids ambiguity for their domain.

  - Defaults & compatibility:
      detail: >
        `keyType` and `valueType` are OPTIONAL. If `keyType` is omitted, it
        defaults to Text. If `valueType` is omitted, values may be any Blue node.
        Subtyping/compatibility must be preserved for both keyType and valueType
        when present (§4.2).

  - Ordering & equality:
      detail: >
        Key order is irrelevant for identity (RFC 8785 canonicalization).
        Equality compares by canonical JSON / BlueId rules.

  - Applicable schema:
      detail: >
        §5.3 object constraints: minFields, maxFields.
keyType:
  description: >
    OPTIONAL. Type for keys. Allowed: Text, Integer, Double, Boolean.
    Defaults to Text when omitted. Keys serialize via the keyType's canonical
    textual form.
valueType:
  description: >
    OPTIONAL. Type constraint for values. If omitted, values are unconstrained
    (any Blue node). If present, each value MUST be equal to or a subtype of
    valueType (§4.2).
```
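
A small TypeScript illustration (the helper name is an assumption) of the key-serialization caveat above: non-Text keys collapse to their canonical textual form, so distinct values with the same canonical string share one JSON member name.

```typescript
// Illustrative only: Text keys pass through unchanged; other primitive keys
// use their canonical textual form as the JSON member name.
function canonicalKey(key: string | number | boolean): string {
  return typeof key === "string" ? key : JSON.stringify(key);
}

canonicalKey(42);   // "42"
canonicalKey(1.0);  // "1"    (canonical Double form drops the trailing ".0")
canonicalKey(true); // "true"
canonicalKey("42"); // "42"   -> same member name as Integer 42
```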

## A.2 Collection Types

``` 
name: List
description: >
  Ordered collection (array) of elements.

  - Authoring & wrappers:
      detail: >
        Surface array (x: [a, b]) and wrapped form (x: { items: [a, b] }) are
        equivalent for authoring; canonical hashing uses the wrapped form
        (§8.2.2). The type itself does NOT declare `items`: `items` is the
        instance payload container. Declaring it on the type would install a
        concrete element array on the type object (a fixed invariant, §4.1) and
        conflate constraints with payload.

  - Order, multiplicity, identity:
      detail: >
        Order and multiplicity are preserved. List hashing is a domain-separated
        streaming fold over element BlueIds (§8.4). [A] ≠ A; [[] , A] ≠ [A].

  - Control item forms (reserved):
      detail: >
        Recognized only at the top level of items when the node's type is List:
        $previous (append anchor), $pos (positional overlay), and $empty (content
        placeholder). $pos is consumed before hashing; $previous is an optimization
        seed only; $empty is content (§12.2–§12.6).

  - Present-empty vs absent & null cleaning:
      detail: >
        Present empty list [] is preserved and hashes differently from null or
        absent. Nulls inside items are removed during cleaning (§8.2.1, §8.2.2).

  - Uniqueness:
      detail: >
        uniqueItems compares by element BlueId (§5.2), not by textual rendering.

  - Merge policy:
      detail: >
        If `mergePolicy` is omitted, assume "positional" (§12.3). "append-only"
        forbids changes to the inherited prefix (no $pos); "positional" allows
        $pos overlays within the inherited prefix. Refinements must remain type-
        compatible with inherited elements (§4.2, §12.5).

  - Applicable schema:
      detail: >
        §5.2 list constraints: minItems, maxItems, uniqueItems.
itemType:
  description: >
    OPTIONAL. Type applied to each element. If omitted, elements are not
    constrained by itemType (still subject to overlays and type chain). Subtype
    compatibility MUST be preserved across refinements (§4.2, §12.5).
mergePolicy:
  type: Text
  description: >
    OPTIONAL. Authoring/merge policy. If omitted, processors MUST assume
    "positional" (§12.3). Allowed values: "append-only", "positional".
  schema:
    enum: [append-only, positional]
```



# Appendix B — Contract & Processor Type Catalog

## B.1 Base Types

``` 
name: Contract
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Base for all contracts (channels, handlers, markers). Contracts live under a
  scope’s `contracts` map (keyed by Text). At runtime (Part II), contract
  processors execute deterministically and only through explicit operations;
  there are no implicit side effects.
order:
  type: Integer
  description: Deterministic sort key within a scope; missing ≡ 0.

```

``` 
name: Json Patch Entry
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Deterministic subset of RFC 6902 used by handlers to request document changes
  (Part II §21.2). NOTE: field is named `val` (not `value`) because `value`
  has special meaning as Blue’s scalar wrapper in Part I; using `val` prevents
  shape collisions with wrapper equivalence.
op:
  type: Text
  schema:
    required: true
    enum: [add, replace, remove]
path:
  type: Text
  description: >
    Absolute JSON Pointer within the document (must begin with "/").
    Runtime forbids targeting "/" (root) as a patch destination (Part II §21.2).
  schema:
    required: true
val:
  description: >
    Payload for `add` and `replace` (any Blue node). Omitted for `remove`.

```
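
A compact TypeScript sketch of applying such entries under the deterministic subset (add, replace as upsert, remove); this is an illustration under assumptions, not the normative algorithm, and it omits the embedded-boundary and reserved-key checks:

```typescript
// Sketch only: simplified application of the add / replace / remove subset (Part II §21.2).
// Omitted: embedded-boundary enforcement, reserved-key protection, cascade emission.
type Doc = any;
interface PatchEntry { op: "add" | "replace" | "remove"; path: string; val?: unknown }

function applyPatch(doc: Doc, entry: PatchEntry): void {
  if (entry.path === "/") throw new Error("fatal: root is not a valid patch target");
  if (!entry.path.startsWith("/")) throw new Error("fatal: pointer must be absolute");

  // RFC 6901 tokenization with ~1 / ~0 unescaping.
  const tokens = entry.path.slice(1).split("/").map(t => t.replace(/~1/g, "/").replace(/~0/g, "~"));
  const last = tokens.pop()!;

  // Walk to the parent, auto-materializing missing intermediate objects as {}.
  let parent: any = doc;
  for (const t of tokens) {
    if (Array.isArray(parent)) {
      const i = Number(t);
      if (!Number.isInteger(i) || i < 0 || i >= parent.length) throw new Error("fatal: index out of range");
      parent = parent[i];
    } else {
      if (parent[t] === undefined) parent[t] = {};
      parent = parent[t];
    }
  }

  if (Array.isArray(parent)) {
    if (entry.op === "add" && last === "-") { parent.push(entry.val); return; }   // append
    const i = Number(last);
    const max = entry.op === "add" ? parent.length : parent.length - 1;
    if (!Number.isInteger(i) || i < 0 || i > max) throw new Error("fatal: index out of range");
    if (entry.op === "add") parent.splice(i, 0, entry.val);                        // insert
    else if (entry.op === "replace") parent[i] = entry.val;
    else parent.splice(i, 1);                                                      // remove
    return;
  }

  if (entry.op === "remove") {
    if (!(last in parent)) throw new Error("fatal: removing a missing key");
    delete parent[last];
  } else {
    parent[last] = entry.val;   // add and replace behave as upsert on object fields
  }
}
```

Under this sketch, `add /a/items/-` appends while `add /a/items/1` inserts, and out-of-range indices or removing a missing key surface as deterministic fatals, mirroring test vectors T11–T13.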

## B.2 Contract Subtypes (abstract)

``` 
name: Channel
type: Contract
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Abstract base for event entry points within a scope. Channels decide whether
  an incoming event matches at this scope (Part II). External channels may also
  use the scope’s checkpoint to gate duplicates/stale events.
event:
  description: >
    Optional matcher payload used by the channel's processor to
    further restrict which incoming events it accepts at this scope.

```

``` 
name: Handler
type: Contract
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Abstract base for logic bound to exactly one channel (same scope). At runtime
  (Part II), a handler may: (1) apply patches (list of Json Patch Entry),
  (2) emit events (Blue nodes), (3) consume gas via `consumeGas(units: Integer)`,
  and (4) terminate (gracefully or fatally). No other effects are permitted.
channel:
  type: Text
  description: >
    The contracts-map key of the channel this handler binds to (same scope).
  schema:
    required: true
event:
  description: >
    Optional matcher payload used by the handler’s processor to further restrict
    which channelized events it will handle. IMPORTANT: the matching strategy
    (shape checks, field tests, etc.) is defined by the specific handler
    processor, not by this base schema.

```

``` 
name: Marker
type: Contract
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Abstract base for informational/policy contracts. Markers do not run logic;
  they carry state/policy enforced by the processor (Part II).

```

## B.3 Required Markers

``` 
name: Process Embedded
type: Marker
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Declares embedded child scopes beneath the current scope (Part II §16).
  The processor reads this list dynamically and re-reads after each child finishes.
paths:
  type: List
  itemType:
    type: Text
  description: >
    Scope-relative absolute pointers to child roots (strings beginning with "/",
    resolved against the current scope).
  schema:
    required: true
    uniqueItems: true

```

``` 
name: Processing Initialized Marker
type: Marker
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Recorded exactly once at a scope on first processing; stores the pre-init
  BlueId of the scope’s subtree (Part II §20). Writing this marker is a patch
  that triggers a Document Update cascade.
documentId:
  type: BlueId
  schema:
    required: true

```

``` 
name: Processing Terminated Marker
type: Marker
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Final state for a scope (either graceful or fatal). Once present, the scope
  becomes permanently inactive both for the remainder of the current run and in
  subsequent runs until explicitly replaced by a parent. Written as a Direct
  Write (no Document Update) when termination occurs (Part II §22).
cause:
  type: Text
  schema:
    required: true
    enum: [fatal, graceful]
reason:
  type: Text
  description: Optional human-readable explanation.

```

``` 
name: Channel Event Checkpoint
type: Marker
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Stores last-seen events per external channel at this scope to enable
  idempotent processing and ordering (Part II §23). Updates are Direct Writes
  (no Document Update).
lastEvents:
  name: Last Events
  description: >
    Map of channelKey (the contracts key of an external channel in this scope)
    to the entire last event node seen for that channel. Values are unconstrained
    (any Blue node) to allow channel-specific shapes.
  type: Dictionary
  keyType:
    type: Text
  # valueType intentionally omitted → any Blue node
  schema:
    required: true

```

## B.4 Processor‑Fed Channels

``` 
name: Document Update Channel
type: Channel
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Fires on successful patches with immediate bottom-up cascade
  (origin → ancestors → root). Matching uses subtree semantics against
  ABS(scope, path). Payload is the processor-emitted 'Document Update' event
  with scope-relative path (Part II §17, §21.1).
path:
  type: Text
  description: >
    Scope-relative absolute pointer (begins with "/") defining the watched
    subtree. Match iff the absolute changed path equals or is a descendant
    of ABS(scope, path).
  schema:
    required: true

```
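
A small TypeScript sketch (helper names are illustrative) of the subtree matching and scope-relative payload path described above:

```typescript
// ABS(scope, path): resolve a scope-relative pointer to an absolute document path.
const abs = (scope: string, rel: string): string =>
  scope === "/" ? rel : rel === "/" ? scope : scope + rel;

// Match iff the absolute changed path equals, or is a descendant of, the watched subtree.
const matches = (absChangedPath: string, watchedAbs: string): boolean =>
  absChangedPath === watchedAbs || absChangedPath.startsWith(watchedAbs + "/");

// The payload path delivered at a scope is the changed path relative to that scope ("/" at the scope root).
const relativize = (absChangedPath: string, scope: string): string =>
  scope === "/" ? absChangedPath
    : absChangedPath === scope ? "/"
    : absChangedPath.slice(scope.length);

// Example mirroring T14: a patch to /a/b/x
matches("/a/b/x", abs("/a", "/b")); // true
relativize("/a/b/x", "/a/b");       // "/x"
relativize("/a/b/x", "/a");         // "/b/x"
relativize("/a/b/x", "/");          // "/a/b/x"
```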

``` 
name: Triggered Event Channel
type: Channel
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Delivers events previously enqueued by handlers into the scope’s FIFO.
  One drain per scope at the end of scope processing; never drains during
  cascades (Part II §17, §19).

```

``` 
name: Lifecycle Event Channel
type: Channel
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Delivers processor lifecycle notifications at this scope, e.g.,
  'Document Processing Initiated' and 'Document Processing Terminated'
  (Part II §17, §20, §22).

```

``` 
name: Embedded Node Channel
type: Channel
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Bridges a child scope’s emissions (including lifecycle nodes) into the parent
  after the child finishes (Part II §17, §19).
childPath:
  type: Text
  description: >
    Scope-relative absolute pointer to the child root to bridge.
  schema:
    required: true

```

## B.5 Processor‑Emitted Events

``` 
name: Document Update
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Emitted once per participating scope for each successful patch
  (bottom-up delivery). 'op' uses lower-case enum; 'path' is scope-relative
  for the receiving scope. 'before' and 'after' are snapshots (immutable views)
  (Part II §21.1).
op:
  type: Text
  schema:
    required: true
    enum: [add, replace, remove]
path:
  type: Text
  description: >
    Scope-relative pointer. "/" when the receiving scope’s root was affected.
  schema:
    required: true
before:
  description: Snapshot before the patch at this path (may be null).
after:
  description: Snapshot after the patch at this path (may be null; often null for remove).

```

``` 
name: Document Processing Initiated
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Published once at a scope on first processing (before writing the
  Processing Initialized Marker). At root, it is also included in the run’s
  'triggered_events' outbox (Part II §20).
documentId:
  type: BlueId
  schema:
    required: true

```

``` 
name: Document Processing Terminated
description: >
  Core type of Blue Language v1.0 (see https://language.blue).
  Published at the terminating scope when processing ends, either gracefully
  or fatally. Bridgeable to the parent via Embedded Node Channel if configured
  (Part II §22).
cause:
  type: Text
  schema:
    required: true
    enum: [fatal, graceful]
reason:
  type: Text
  description: Optional explanation.

```



====================
DOCUMENT: What are PayNotes?
CATEGORY: root
URL: https://language.blue/docs/paynotes
====================


# What are PayNotes?    

## **1\. Introduction & Overview**

The **PayNote** is a standard Blue document that represents a commitment of value from a **Payer** to a **Payee**, arbitrated and secured by a trusted **Guarantor**. It is not just a record of a transaction; it is a live, stateful, and programmable document that manages the entire lifecycle of a value exchange, from authorization and reservation to final capture or release.

By modeling payments and value commitments as verifiable, event-driven documents, the PayNote provides a level of transparency, security, and flexibility that is difficult to achieve with traditional APIs. It creates a "chain of evidence" for every step of the process, which can be independently verified by all participants.

## **2\. Core Concepts**

### **The Three Participants**

Every PayNote has three core roles, represented by channels in the document:

- **Payer**: The party providing the funds or value. The Payer initiates the PayNote and authorizes the use of their resources.
- **Payee**: The party intended to receive the funds or value. The Payee is often the one who triggers the final capture of funds.
- **Guarantor**: The trusted entity that controls the underlying funds or value (e.g., a bank, a credit card processor, a platform like a restaurant for a voucher). The Guarantor is the ultimate source of truth; it is the only participant with the authority to emit events that formally change the PayNote's state (e.g., `Funds Reserved`, `Funds Captured`).

### **The Event-Driven Lifecycle**

A PayNote evolves through a series of states. Participants (Payer or Payee) trigger **Requests** (e.g., `Capture Funds Requested`), and the Guarantor responds by emitting definitive, strongly-typed **Events** (e.g., `Funds Captured` or `Capture Declined`). This creates a decoupled, asynchronous flow where all state changes are explicit, auditable facts recorded on a timeline.

### **Child PayNotes: Subdividing Value**

A key feature of the PayNote is its ability to issue **Child PayNotes**. A Payer can reserve a larger sum in a parent PayNote (e.g., $500) and then grant an agent or another person the ability to issue smaller, independent PayNotes that draw from this reserved amount.

- When a Child PayNote is **captured**, the funds are transferred by the Guarantor, and the reserved amount on the parent is reduced.
- When a Child PayNote is **released** or expires, the value returns to the parent PayNote's reserved pool.

* * *

## **3\. The Base** `PayNote` **Definition**

This is the foundational Blue document `type` for a `PayNote`. It defines the core state fields, the participants, and the operations the Guarantor can use to emit status-changing events.

```
name: PayNote
description: A verifiable instrument for managing a commitment of value between a Payer, Payee, and Guarantor.

# --- Core State Fields ---
status:
  description: The current state of the PayNote (e.g., Pending, Approved, Reserved, Captured, Released, Rejected).
  type: Text
  value: Pending
currency:
  description: The ISO 4217 currency code for the transaction.
  type: Text
amount:
  description: The amounts associated with this PayNote.
  total:
    description: The maximum total value of this PayNote.
    type: Integer # Stored in minor units (e.g., cents)
  reserved:
    description: The amount currently reserved by the Guarantor.
    type: Integer
    value: 0
  captured:
    description: The amount that has been successfully captured.
    type: Integer
    value: 0
payNoteInitialStateDescription:
  summary:
    type: Text
    description: |
      General information about the PayNote. It should capture the most important
      information; the most critical or non-standard elements should be covered here.
      Markdown format suggested.
  details:
    type: Text
    description: |
      Complete text description of the PayNote, with everything significant explained.
      Markdown format suggested.

# --- Participants & Contracts ---
contracts:
  payerChannel:
    type: MyOS Timeline
  payeeChannel:
    type: MyOS Timeline
  guarantorChannel:
    type: MyOS Timeline

  # --- Operations for the Guarantor to emit state-changing events ---
  # Each operation is restricted to the guarantorChannel. When called, it simply
  # triggers a corresponding event that all participants can observe.

  approvePayNote:
    type: Operation
    channel: guarantorChannel
    # ... implementation triggers 'PayNote Approved' event ...

  rejectPayNote:
    type: Operation
    channel: guarantorChannel
    # ... implementation triggers 'PayNote Rejected' event ...

  specifySettlementAmount:
    type: Operation
    channel: guarantorChannel
    request: { type: Integer } # The final amount
    # ... implementation triggers 'Settlement Amount Specified' event ...
  
  # ... and so on for all other Guarantor-driven events ...

```

* * *

## **4\. PayNote Events (The Guarantor's Vocabulary)**

These are the official, state-changing events that can **only be emitted by the Guarantor**.

### **Lifecycle Events**

```
name: PayNote Approved
type: Response
description: The Guarantor confirms the PayNote is valid and ready for further action.
---
name: PayNote Rejected
type: Response
description: The Guarantor denies the creation of the PayNote.
reason:
  type: Text
---
name: PayNote Cancelled
type: Response
description: The Guarantor confirms PayNote cancellation requested by Payer.
---
name: PayNote Cancellation Rejected
type: Response
description: The Guarantor denies PayNote cancellation requested by Payer.
reason:
  type: Text

```

### **Reservation Events**

```
name: Funds Reserved
type: Response
description: The Guarantor confirms that funds have been successfully reserved (held).
amountReserved:
  type: Integer
---
name: Reservation Declined
type: Response
description: The Guarantor could not reserve the requested funds.
reason:
  type: Text
---
name: Reservation Released
type: Response
description: The Guarantor confirms that previously reserved funds have been released.
amountReleased:
  type: Integer
---
name: Reservation Release Declined
type: Response
description: The Guarantor failed to release the reserved funds.
reason:
  type: Text

```

### **Capture Events**

```
name: Funds Captured
type: Response
description: The Guarantor confirms that funds have been successfully transferred to the Payee.
amountCaptured:
  type: Integer
---
name: Capture Declined
type: Response
description: The Guarantor declined the capture request before attempting the transfer.
reason:
  type: Text
---
name: Capture Failed
type: Response
description: The Guarantor attempted the transfer, but it failed for a technical reason.
reason:
  type: Text

```

### **Settlement & Child PayNote Events**

```
name: Settlement Amount Specified
type: Response
description: The Guarantor confirms the final settlement amount, which may differ from the initial total.
finalAmount:
  type: Integer
---
name: Settlement Amount Rejected
type: Response
description: The Guarantor rejects the proposed settlement amount.
reason:
  type: Text
---
name: Child PayNote Issued
type: Response
description: The Guarantor confirms that a new Child PayNote has been issued against this parent.
childPayNote:
  description: Content of the issued PayNote
---
name: Child PayNote Issuance Declined
type: Response
description: The Guarantor declined the request to issue a Child PayNote.
reason:
  type: Text

```

* * *

## **5\. PayNote Requests (The Participant's Vocabulary)**

These are the events that the **Payer** or **Payee** can trigger to request an action from the Guarantor.

```
name: Payee Assignment Requested
type: Request
description: The Payer requests to assign or change the Payee.
payeeEmail:
  type: Text
---
name: Reserve Funds Requested
type: Request
description: A participant requests the Guarantor to reserve (hold) a specific amount.
amount:
  type: Integer
---
name: Reservation Release Requested
type: Request
description: A participant requests the Guarantor to release previously held funds.
amount:
  type: Integer
---
name: Capture Funds Requested
type: Request
description: A participant (usually the Payee) requests the final transfer of funds.
amount:
  type: Integer
---
name: Reserve Funds and Capture Immediately Requested
type: Request
description: Equivalent of Reserve Funds Requested + Capture Funds Requested
amount:
  type: Integer
---
name: Issue Child PayNote Requested
type: Request
description: |
  The Payer requests to issue a new PayNote.
  The requested PayNote must have amount/total specified and use the same currency as this PayNote.
  If approved by the Guarantor, amount/total of this PayNote will be reduced by that value.
  If the child PayNote is later cancelled, or its captured value is smaller than its total, the funds will be added back to amount/total.
childPayNote:
  description: Complete PayNote
---
name: PayNote Cancellation Requested
type: Request
description: |
  The Payer requests to cancel a PayNote that is Approved but not yet
  Reserved or Captured.
childPayNote:
  description: Complete PayNote

```

* * *

## **6\. Use Cases & Complete Examples**

The true power of the `PayNote` lies in its flexibility. It can model everything from a simple, one-off bank transfer to a complex, AI-managed budget. The following examples demonstrate how different configurations of the base `PayNote` type can achieve vastly different outcomes.

* * *

### **Example 1: Simple, Self-Executing Bank Transfer**

**Scenario:** This is a standard, direct payment. Alice wants to send Bob $250.00, and the transfer should happen automatically as soon as the `PayNote` is created.

**Implementation:** This "fire-and-forget" payment is achieved with a `bootstrap` workflow that, upon initiation, immediately triggers a `Reserve Funds and Capture Immediately Requested` event.

```
name: Payment for Invoice Q3-SERVICES
type: PayNote

# --- Instance Data ---
currency: USD
amount:
  total: 25000 # $250.00

payNoteInitialStateDescription:
  summary: |
    This is a direct payment of **$250.00 USD** from Alice to Bob, securely processed by Citi Bank. The transfer will be executed automatically once this payment note is created.
  details: |
    This document authorizes a single, immediate transfer of funds.

    #### Participants
    * **Payer**: Alice (the sender)
    * **Payee**: Bob (the recipient)
    * **Guarantor**: Citi Bank (the financial institution handling the transfer)

    #### Operations
    There are no actions for any participant to take. This payment is fully automated.

    #### Scenarios
    * **Successful Transfer:** Upon creation, Citi Bank will automatically reserve and transfer the full $250.00 to Bob. No further steps are needed. A confirmation (`Funds Captured`) will be recorded here once complete.
    * **Failed Transfer:** If the transfer cannot be completed for any reason (e.g., insufficient funds), Citi Bank will record the failure here (`Reservation Declined` or `Capture Failed`), providing a clear and verifiable reason.

# --- Participants & Logic ---
contracts:
  bootstrap:
    type: Sequential Workflow
    event:
      type: Document Processing Initiated # Triggers on creation
    steps:
      - name: RequestImmediatePayment
        type: Trigger Event
        event:
          type: Reserve Funds and Capture Immediately Requested
          # Amount is intentionally omitted to default to the PayNote's total.

  payerChannel:
    type: MyOS Timeline # Bound to Alice's account
  payeeChannel:
    type: MyOS Timeline # Bound to Bob's account
  guarantorChannel:
    type: MyOS Timeline # Bound to Citi Bank's account

```

* * *

### **Example 2: Conditional Escrow for Shipment**

**Scenario:** A customer (Payer) is paying a merchant (Payee) €120.00 for goods that will be delivered by DHL. The payment should be held securely by the bank (Guarantor) and only released to the merchant *after* DHL confirms the package has been delivered.

**Implementation:** This creates a 4-party escrow. The `bootstrap` workflow only reserves the funds. A new custom operation, `shipmentConfirmed`, is added and restricted to the `shipmentCompanyChannel`. Only when DHL calls this operation will the `PayNote` then trigger the final `Capture Funds Requested`.

```
name: Escrow Payment for Shipment #SH-481516
type: PayNote

# --- Instance Data ---
currency: EUR
amount:
  total: 12000 # €120.00

payNoteInitialStateDescription:
  summary: |
    This is a protected payment of **€120.00 EUR**. The funds are held securely by your bank and will only be released to the Merchant after **DHL confirms successful delivery**.
  details: |
    This PayNote acts as a secure escrow to protect the Payer. The payment is guaranteed, but the final transfer is conditional on a confirmation from the shipping company.

    #### Participants
    * **Payer**: The Customer (sender of funds)
    * **Payee**: The Merchant (recipient of funds)
    * **Guarantor**: The Bank (holds the funds in escrow)
    * **Shipment Company**: DHL (provides delivery confirmation)

    #### Operations
    * **`shipmentConfirmed`** (Callable by: **Shipment Company - DHL**)
        * This action is performed by DHL to certify that the delivery is complete. This is the trigger that releases the payment to the Merchant.

    #### Scenarios
    1.  **Payment and Delivery:**
        * The Payer initiates the payment, and the Bank immediately reserves (holds) the €120.00.
        * DHL delivers the package to the Payer.
        * DHL then calls the `shipmentConfirmed` operation on this document.
        * This automatically authorizes the Bank to transfer the €120.00 to the Merchant. The process is complete.
    2.  **Shipment Issue:**
        * If the shipment is never confirmed by DHL, the funds remain reserved. The Payer can then initiate a cancellation to have the funds released back to their account.

# --- Participants & Logic ---
contracts:
  bootstrap:
    type: Sequential Workflow
    event: { type: Document Processing Initiated }
    steps:
      - name: RequestReservation
        type: Trigger Event
        event:
          type: Reserve Funds Requested
          amount: ${document('/amount/total')}
  
  shipmentConfirmed:
    type: Operation
    description: Must be called by the Shipment Company to confirm delivery and trigger payment capture.
    channel: shipmentCompanyChannel # Only DHL can call this.

  shipmentConfirmedImpl:
    type: Sequential Workflow Operation
    operation: shipmentConfirmed
    steps:
      - name: RequestFinalCapture
        type: Trigger Event
        event:
          type: Capture Funds Requested
          amount: ${document('/amount/total')}

  # Participants
  payerChannel:
    type: MyOS Timeline # Bound to Customer's account
  payeeChannel:
    type: MyOS Timeline # Bound to Merchant's account
  guarantorChannel:
    type: MyOS Timeline # Bound to the Bank's account
  shipmentCompanyChannel:
    type: MyOS Timeline # Bound to DHL's account

```

* * *

### **Example 3: AI Agent Issuing Child PayNotes**

**Scenario:** Alice wants to give her AI Shopping Agent a pre-approved budget of $1,000.00 to make multiple purchases on her behalf. The agent should be able to create smaller, independent payments for different vendors against this master budget.

**Implementation:** Alice creates a master `PayNote` and reserves the full amount. The document includes an `issueChildPayNote` operation that only she (or her agent, acting on her behalf) can call. This operation allows the agent to request the issuance of new, self-contained `Child PayNotes`.

```
name: AI Shopping Agent Managed Account
type: PayNote

# --- Instance Data ---
currency: USD
amount:
  total: 100000 # $1,000.00

payNoteInitialStateDescription:
  summary: |
    This document establishes a secure, pre-approved budget of **$1,000.00 USD** for an authorized AI Agent. The agent has been granted the ability to create and execute smaller, individual payments ("Child PayNotes") against this total budget.
  details: |
    This PayNote functions as a master account with a fixed, reserved limit. It does not make payments directly but authorizes the creation of smaller, linked payments.

    #### Participants
    * **Payer**: Alice (the owner of the funds)
    * **Guarantor**: Alice's Bank (the institution managing the funds)

    #### Operations
    * **`issueChildPayNote`** (Callable by: **Payer - Alice / Her Agent**)
        * This operation allows an authorized agent, acting on Alice's behalf, to request the issuance of a new, independent Child PayNote that draws funds from this master budget. The request must include the full details of the child payment, including its amount and payee.

    #### Scenarios
    1.  **Budget Setup:**
        * Alice creates this document, and her Bank immediately reserves the full $1,000.00. This action secures the total budget.
    2.  **Agent Initiates a Payment:**
        * The agent calls the `issueChildPayNote` operation.
        * The Bank validates that the requested amount is within the remaining budget.
        * If valid, the Bank creates the new Child PayNote and records its issuance here. The child payment then runs its own lifecycle (e.g., transferring funds to a vendor).
        * The available budget on this master document is automatically reduced.
    3.  **Budget Exceeded:**
        * If the agent attempts to issue a child payment that exceeds the available budget, the Bank will reject the request. The total spending can never exceed the initial $1,000.00.

# --- Participants & Logic ---
contracts:
  bootstrap:
    type: Sequential Workflow
    event: { type: Document Processing Initiated }
    steps:
      - name: ReserveFullBudget
        type: Trigger Event
        event:
          type: Reserve Funds Requested
          amount: ${document('/amount/total')}

  issueChildPayNote:
    type: Operation
    description: Allows the Payer (or their agent) to issue a new Child PayNote against the reserved funds.
    channel: payerChannel # Only Alice or her agent can call this.
    request:
      type: PayNote # Expects a complete PayNote document as input.

  issueChildPayNoteImpl:
    type: Sequential Workflow Operation
    operation: issueChildPayNote
    steps:
      - name: RequestChildIssuance
        type: Trigger Event
        event:
          type: Issue Child PayNote Requested
          childPayNote: ${event.message.request}

  # Participants
  payerChannel:
    type: MyOS Timeline # Bound to Alice's account
  guarantorChannel:
    type: MyOS Timeline # Bound to the Bank's account
  # payeeChannel is left unbound at this master level.

```



====================
DOCUMENT: Tutorial
CATEGORY: root
URL: https://language.blue/docs/introduction
====================


# Quick Tutorial - Introduction

Your AI assistant can already search, compare prices, and fill shopping carts. But you'd never give it your credit card because there's no way to set safe boundaries.

**Blue** unlocks safe autonomy at machine speed: it lets every participant - human or AI - specify **exactly** what's permitted, verify **who** initiated each action, and rely on trusted processors to enforce the rules.

## 1. The big idea (why Blue feels familiar)

When two people strike a deal we …

1. **Talk it through** in a shared [language](./language/introduction).
2. **Write it down** so everyone can see the terms and track how the deal is unfolding.
3. **Bring in trusted parties** - escrow, card network, courier - when the stakes feel risky.
4. **Act and observe** - we do our part while watching others do theirs.

Blue gives software the same four steps:

| Human habit         | Blue feature                                                                                                                                 |
|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
| Shared words        | **Full-sentence [vocabulary](./language/blue-id).** Forget stubby API nouns; describe intent the way you'd explain it to a person.           |
| Written contract    | **Self-contained [document](./contracts/introduction).** Facts *plus* deterministic rules any Blue processor can run.                        |
| Trusted 3rd parties | **Named participants** (Card Processor, DHL) who participate in the interaction by following the agreed [rules](./contracts/sequential-workflow).    |
| Verifiable activity | **Personal signed [timelines](./technical-white-paper-timelines).** Each party publishes their actions to their own timeline while observing others'. |

> **Participants** are actors named in the document (e.g. Alice, Bob, Card Processor).  
> **Processors** are whichever engines they choose - [MyOS](https://myos.blue) in the cloud, a CLI binary, or tomorrow's phone app - that read the document **plus** the timelines and, as long as they are compliant with the [specification](./spec), converge independently.

## 2. Example Story — *"I want a painless refund"*

### 2.1. The problem
Alice likes Bob's \$120 video-course but has never heard of him. She'll buy only if a refund is automatic for the first week. Bob is happy—provided everything runs itself.

### 2.2. Bob's Blue solution

:::info Order document  — participants: Alice, Bob
- If Alice accepts the course, Bob provides an embedded payment document
- If payment is refunded, course access is immediately revoked
  :::

:::info Card Processor-payment document — participants: Alice, Bob, Card Processor
- **Capture** $120 the moment Alice pays.
- For the next **7 days** Alice may post `RequestRefund`.
- On that event **Card Processor refunds the full amount** automatically—no Bob approval.
- Bob can still hit the API and refund on his own.
  :::

One link, one language, one contract - each party processes it on whichever engine they already trust.

### 2.3. Why Alice clicks **Buy**

* **She trusts Card Processor, not Bob.** She interacts directly with Card Processor and knows her refund right is enforced.
* **Crystal-clear checkout copy.** Card Processor can show *"Instant refund within 7 days"* because it reads that promise straight from Bob's document.
* **Self-service.** Alice can click *Refund* **or** drop `RequestRefund` onto her timeline; either path works.

### 2.4. How the week unfolds

| Day | Timeline entry | Outcome (seen by every processor) |
|-----|----------------|-----------------------------------|
| 0   | `AuthorisePayment` (Alice) → `PaymentCaptured` (Card Processor) | Course unlocked; refund window ticking |
| 5   | `RequestRefund` (Alice)                                | Card Processor refunds \$120; Bob's dashboard logs `PaymentRefunded`; Order flips to **Cancelled** |
| 8   | *(none)*                                              | A late request is ignored - rule expired |

No emails, no screenshots, no chargebacks: the timelines settle it.

## Why This Works: Email + Blockchain, Without the Baggage

Think of Blue as combining the best of both worlds:
- **Like email**: You control your own timeline, share what you want, maintain privacy
- **Like blockchain**: Everyone can verify what happened, but without expensive global consensus
- **Unlike both**: Documents contain executable rules, not just static data

Alice's refund worked because her timeline proved her request, Card Processor's timeline proved the refund, and the document's rules guaranteed the outcome.

## 3. What this buys you

* **No new backbone** — run [MyOS](https://myos.blue), a [Java local app](https://github.com/bluecontract/blue-language-java), or a [Node.js Lambda](https://github.com/bluecontract/blue-js); everyone reaches the same verdict.
* **Infinite scalability** — every party processes only the timelines named in its contract, so there's **no global ledger to bottleneck or shard**. One deal or a million run with the same architecture.
* **No schema-sync calls** — reference any concept; the [BlueId](./language/blue-id) proves its meaning.
* **No invisible behaviour** — the only path to action is a [timeline](./technical-white-paper-timelines) everyone can inspect.
* **Plug-and-play trust** — drop in a bank, insurer, or sensor as a named participant; they enforce their slice and earn a fee without new code or schema mapping.

## 4. Why Blue ≠ blockchain / DLT

* **Personal timelines, not global consensus.** Like email, you control your own timeline and share selectively. Unlike blockchain, there's no expensive global ledger to maintain or expensive mining to secure.
* **Document-focused verification.** Processors only need timelines from participants named in a specific document—not the entire network's history.
* **Privacy by design.** Alice's `RequestRefund` is visible to Card Processor but not necessarily to anyone else—no public blocks exposing every transaction.
* **Efficiency without compromise.** All processors reach the same conclusion, but without mining, global consensus, or network-wide synchronization—just verified timelines from the specific parties involved.

## 5. Where this heads next

### A. Programmable Money: PayNotes
Traditional payments are fire-and-forget. **[PayNotes](https://paynotes.blue)** are living contracts: conditional escrow tied to delivery confirmation, split payments across multiple recipients, time-bound refund windows, usage-based caps.

**The AI unlock:** Give your AI a PayNote with a $500 budget and rules—it can negotiate and pay suppliers autonomously, but **can't exceed the cap or violate terms**. You're not giving it your credit card; you're giving it a contract the Card Processor enforces. [Read more about PayNotes](./paynotes) and see [real examples](https://paynotes.blue/examples).
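
Purely as an illustration (the actual PayNote types are defined at [paynotes.blue](https://paynotes.blue), not in this documentation), the $500 budget can be pictured as a Schema Validator guarding a balance that spending workflows decrement:

```yaml
name: Agent Budget            # hypothetical document, not a real PayNote type
remainingBudget: 500
contracts:
  budgetGuard:
    type: Schema Validator
    remainingBudget:
      minimum: 0              # the balance may never go negative, so total spend stays under the cap
  # spending workflows (omitted here) would decrement /remainingBudget on each payment
```

Whatever the agent negotiates, any compliant processor rejects a state in which the cap is violated.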

### B. The AI-to-AI Economy Is Here
The next revolution isn't AI assistants—it's **AI agents negotiating with other AI agents** at machine speed. **[MyOS for Business](https://myos.blue/for-business/shop)** makes this real today.

Your Sales AI negotiates with Customer AI while both humans sleep. They wake to **full audit trails**: who proposed what, who accepted, who paid. Because every step is a Blue document on verified timelines, you see exactly what your AI negotiated and why—**trackable, verifiable, no surprises**.

### C. Verifiable AI Plans
As agents handle complex tasks—drafting contracts, allocating budgets, coordinating supply chains—blind trust breaks down. The solution: **if every agent explains its plan as a Blue document, the rules become enforceable**.

The AI can't act outside its boundaries because the Blue document *is* the boundary. Before execution, the plan gets checked—by **humans or another AI**. [MyOS uses this pattern](https://myos.blue/ai-gateway): one agent prepares the plan, a second agent reviews it and warns the user if something looks off.

**Trust through structure:** The AI moves fast, but the Blue document ensures it stays in bounds.

### D. Composability: Trust Infrastructure as Lego Bricks
Because Blue documents are **content-addressed by meaning** ([BlueId](./language/blue-id)), proven patterns become reusable. Found a great escrow pattern? **Reference its BlueId**. Need the same refund policy across products? **One definition, infinite uses**. Building on industry standards? **Import from [repo.blue](https://repo.blue)**.

As more organizations publish patterns, **complex multi-party deals assemble from verified components**—each already working in production elsewhere. The cost of trust infrastructure approaches zero.
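
Reusing such a pattern is an ordinary type reference by BlueId, exactly as shown in the language introduction. The identifier and field names below are placeholders for illustration:

```yaml
name: Laptop Sale Escrow
type:
  blueId: Escrow-Pattern-BlueId   # placeholder for a published escrow pattern
buyer: alice@example.com          # illustrative fields the pattern might expect
seller: bob@example.com
```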

## 6. Take-home sentence

> **Blue lets software speak in full sentences, wrap those sentences in a verifiable contract, and require every action to land on a signed timeline - so any processor you trust can enforce the deal exactly as written.**

Everything else - [MyOS dashboards](https://myos.blue), [SDKs](https://github.com/bluecontract/blue-language-java), [channel-binding recipes](./contracts/channel-binding) - exists to make that sentence trivial to ship.

### Next stops

* **[Language primer](./language/introduction)** — BlueIds, structural types, extending the web of meaning.
* **[Contract guide](./contracts/introduction)** — workflows, channels, coordination strategies.
* **[What is a PayNote?](./paynotes)** — delivery capture, usage billing, escrow layers.
* **[Blue Language Specification](./spec)** — post events, watch independent processors agree.


====================
DOCUMENT: Introduction to Blue Language
CATEGORY: language
URL: https://language.blue/docs/language/introduction
====================


# Introduction to Blue Language

## The Problem of Reinvention

How many times have you defined a `Person` class in your career?

```java
// Java
public class Person {
    private String name;
    private String surname;
    private int age;
    // And so on...
}
```

```typescript
// TypeScript
interface Person {
  name: string;
  surname: string;
  age: number;
  // And so on...
}
```

```python
# Python
class Person:
    def __init__(self, name, surname, age):
        self.name = name
        self.surname = surname
        self.age = age
        # And so on...
```

Every API, every service, every application reinvents these basic types. We have schemas, DTOs, POJOs, models, and interfaces—all representing the same concepts in slightly different ways.

Solutions like Schema.org tried to standardize these definitions, but they weren't flexible enough or easy enough to integrate with existing systems. The problem persists: we lack a universal language for describing digital objects.

## A Universal Language for Digital Objects

We believe that clear communication requires a common language—not just for simple API requests and responses, but for complex digital relationships, contracts, and behaviors. That's why Blue is the **B**asic **L**anguage that **U**nifies **E**xperience.

Blue lets you define an object once and use it everywhere, across systems, programming languages, and organizations.

## The Core Idea: Content-Addressable Types

The key innovation in Blue is that every document has a unique content hash called a **blueId**. This is essentially a single word that uniquely represents specific content.

You can try it yourself at [web.blue/blue-id-calculator](https://web.blue/blue-id-calculator)

Let's start with a simple type definition:

```yaml
name: Simple Amount
amount:
  type: Double
currency:
  type: Text
```

This has the blueId `AR7X5ws8eHh2o6M3QczGVQmJ5pP3685vGj2PzYQZDLTS` - a unique fingerprint of its content.

Now we can define another type that uses this one:

```yaml
name: Person
age:
  type: Integer
spent:
  type:
    blueId: AR7X5ws8eHh2o6M3QczGVQmJ5pP3685vGj2PzYQZDLTS # Simple Amount
```

This `Person` type has blueId `6E93TBwTYYZ3zaWQhryCWz4rnJgGisaDgPrU8RnVLWuC`.

## Creating Instances with Types

Now we can create an actual person:

```yaml
name: Alice
type:
  blueId: 6E93TBwTYYZ3zaWQhryCWz4rnJgGisaDgPrU8RnVLWuC # Person
age: 25
spent:
  amount: 27.15
  currency: USD
```

This instance has blueId `9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU`.

## The Magic of Meaning-Based Identity

Here's where it gets interesting. You'll get the _exact same blueId_ if you expand all the type references:

```yaml
name: Alice
type:
  name: Person
  age:
    type: Integer
  spent:
    type:
      name: Simple Amount
      amount:
        type: Double
      currency:
        type: Text
age: 25
spent:
  amount: 27.15
  currency: USD
```

This produces the same blueId: `9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU`

This is a core principle of Blue: **documents with the same meaning have the same blueId, regardless of how they're represented.**

## Simplifying with Names

For convenience, you can use type names directly with the `blue` directive:

```yaml
blue: https://language.blue/simple.yaml
name: Alice
type: Person
age: 25
spent:
  amount: 27.15
  currency: USD
```

This tells processors where to find the type definitions. Throughout this documentation, we'll use this convention to keep examples clean and focused.

## Extension and Type Safety

In Blue, every document can be a type for another document:

```yaml
name: Alice Extended
type: Alice
smiling: true
friends:
  type: List
  itemType: Person
```

Every `Alice Extended` must be an `Alice`. If fields like `age` or `spent` were already set in `Alice`, they cannot be overwritten—you can only add new attributes or complete those that weren't specified earlier.
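
For example, this hypothetical document would be rejected because it tries to change a value that `Alice` already fixes:

```yaml
# INVALID - 'age' is already set to 25 in Alice and cannot be overwritten
name: Alice Thirty
type: Alice
age: 30
```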

This creates a strong type system without requiring a centralized registry of types.

## Language Integration

Blue isn't just a document format—it integrates with your programming language of choice. For Java:

```java
@TypeBlueId("AR7X5ws8eHh2o6M3QczGVQmJ5pP3685vGj2PzYQZDLTS")
@AllArgsConstructor
public class SimpleAmount {
    Double amount;
    String currency;
}

@TypeBlueId("6E93TBwTYYZ3zaWQhryCWz4rnJgGisaDgPrU8RnVLWuC")
@AllArgsConstructor
public class Person {
    String name;
    Integer age;
    SimpleAmount spent;
}

// Creating and using a Blue document
Person alice = new Person("Alice", 25, new SimpleAmount(27.15, "USD"));
Blue blue = new Blue();

// Calculate the blueId
String blueId = blue.calculateBlueId(alice);
assert blueId.equals("9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU");

// Convert to YAML
String yaml = blue.objectToYaml(alice);
```

The calculated blueId will match the one from our YAML examples, and the generated YAML will be semantically equivalent.

## Reusing Existing Types

Rather than defining all types yourself, you can import libraries from [repo.blue](https://repo.blue) and other sources. This gives you immediate access to standard types like `Person`, `Address`, `Payment`, and more specialized industry-specific types.

For popular types, language-specific libraries are available:

```bash
# JavaScript/TypeScript
npm install @blue-repository/identity-types
```

```xml
<!-- Java (Maven) -->
<dependency>
  <groupId>blue.repository</groupId>
  <artifactId>identity-types</artifactId>
  <version>1.0.0</version>
</dependency>
```

```bash
# Python
pip install blue-repository-identity-types
```

This eliminates the need to reinvent common structures and ensures consistent definitions across systems and organizations.



====================
DOCUMENT: Type System and Extension
CATEGORY: language
URL: https://language.blue/docs/language/types
====================


# Type System and Extension

In the previous section, we introduced the basic concept of types in Blue. Now, let's explore the type system more deeply, seeing how it enables powerful inheritance and extension models while maintaining strict compatibility.

:::note
For clarity, most examples in this documentation omit the `blue:` directive that would typically be required in practice. Learn more about this directive in the [Blue Directive](./blue-directive) section.
:::

## Building Type Hierarchies

Blue's type system allows you to create sophisticated inheritance hierarchies where more specialized types extend general ones:

```yaml
name: Product
price:
  amount:
    type: Number
    contracts:
      amountValidator:
        type: Schema Validator
        minimum: 0
  currency:
    type: ISO-4217 Currency Code
sku:
  type: Text
```

The `contracts` section introduces an important Blue concept: contracts define special behaviors for documents or fields. Here, the Schema Validator contract enforces constraints on the `price.amount` field. You'll learn more about contracts in the [Contracts section](../contracts/introduction).

Note that all fields in Blue are optional by default. If you want to make a field required, you need to add that constraint explicitly:

```yaml
name: Electronics
type: Product
manufacturer:
  type: Text
  contracts:
    requiredFieldValidator:
      type: Schema Validator
      required: true # This field must be present
```

## Specialization with Constraints

Let's create more specialized product types by adding both fields and constraints:

```yaml
name: Smartphone
type: Electronics
screenSize:
  type: Number
  contracts:
    screenSizeValidator:
      type: Schema Validator
      minimum: 3.5
      maximum: 7.5
operatingSystem:
  type: Text
  contracts:
    osOptionsValidator:
      type: Schema Validator
      options: ['iOS', 'Android']
```

You can also define field types separately for reuse:

```yaml
name: Operating System
type: Text
contracts:
  osOptionsValidator:
    type: Schema Validator
    options: ['iOS', 'Android']
---
name: Smartphone
type: Electronics
screenSize:
  type: Number
  contracts:
    screenSizeValidator:
      type: Schema Validator
      minimum: 3.5
      maximum: 7.5
operatingSystem:
  type: Operating System
```

Both approaches achieve the same result, giving you flexibility in how you organize your types.

Let's continue building our hierarchy:

```yaml
name: iPhone
type: Smartphone
operatingSystem: iOS # Fixed value for all iPhones
model:
  type: Text
  contracts:
    modelValidator:
      type: Schema Validator
      required: true
```

## Inheriting and Adding Contracts

When you extend a type, you inherit all its contracts. To add additional constraints to fields, use new contracts with descriptive names:

```yaml
name: Premium Smartphone
type: Smartphone
price:
  amount:
    contracts:
      premiumPriceValidator:
        type: Schema Validator
        minimum: 599.99 # Additional constraint on price amount
```

When resolved, the `price.amount` field will have both the `amountValidator` from `Product` (requiring ≥ 0) and the `premiumPriceValidator` from `Premium Smartphone` (requiring ≥ 599.99). Together, they require the amount, if present, to be at least 599.99.

This ability to progressively add constraints is powerful for modeling domain-specific rules that build upon more general ones.

## Multi-Level Inheritance

Blue supports multi-level inheritance, allowing you to create deeply specialized types:

```yaml
name: iPhone 14
type: iPhone
model: '14' # Fixed value for all iPhone 14s
storage:
  type: Text
  contracts:
    storageOptionsValidator:
      type: Schema Validator
      options: ['128GB', '256GB', '512GB', '1TB']
```

When we create an instance:

```yaml
name: iPhone 14 128GB
type: iPhone 14
storage: '128GB'
price:
  amount: 799.99
  currency: USD
color: Blue
```

The document inherits all properties from the entire inheritance chain, including the fixed values `operatingSystem: iOS` from the `iPhone` type and `model: "14"` from the `iPhone 14` type.

## Type Enforcement Rules

The Blue type system enforces strict rules during inheritance:

1. **No Overriding Values**: If a parent type defines a field value, child types inherit exactly that value and cannot change it
2. **Type Compatibility**: If a parent type defines a field's type, child types must use the same or a compatible subtype
3. **Additive Properties Only**: Child types can add new properties but cannot remove inherited ones

For example, these would be invalid:

```yaml
# INVALID - trying to change inherited fixed value
name: InvalidiPhone
type: iPhone
operatingSystem: Android # Cannot override 'iOS' value from iPhone type
```

```yaml
# INVALID - trying to change the type of 'price.amount'
name: InvalidProduct
type: Product
price:
  amount:
    type: Text # Cannot override Number type from Product
```

These strict rules ensure that a fundamental principle holds true: **Every instance of a subtype is also a valid instance of its parent type**. This means every "iPhone 14" is an "iPhone", every "iPhone" is a "Smartphone", and so on up the type hierarchy.

This principle—known as the Liskov Substitution Principle in object-oriented design—is strictly enforced in Blue's type system.

## Working with Collections

Blue supports two primary collection types—lists and dictionaries—that let you build complex data structures:

```yaml
# A product bundle with a list of items
name: Starter Tech Bundle
bundlePrice:
  amount: 1299.99
  currency: USD
items:
  type: List
  itemType: Product
    - type: Smartphone
      name: iPhone 14
      model: "14"
      storage: "128GB"
      price:
        amount: 799.99
        currency: USD
    - type: Electronics
      name: Wireless Earbuds
      manufacturer: Apple
      price:
        amount: 249.99
        currency: USD
    - type: Electronics
      name: Wireless Charger
      manufacturer: Belkin
      price:
        amount: 49.99
        currency: USD
```

```yaml
# Inventory management with product counts
name: Store Inventory
stock:
  type: Dictionary
  keyType: Text
  valueType: Integer
  'iPhone 14 128GB Black': 23
  'iPhone 14 256GB Black': 15
  'iPhone 14 Pro 128GB Silver': 8
  'Samsung Galaxy S23': 12
```

These examples demonstrate how collections enable complex data modeling while maintaining Blue's strict typing:

- The `items` list can contain any `Product` subtype
- The `stock` dictionary maps product names to their inventory counts

## Programming Language Integration

Blue types map cleanly to object-oriented programming concepts:

```java
// Define classes mapped to Blue types
@TypeBlueId("Price-BlueId")
public class Price {
    private Double amount;
    private String currency;
}

@TypeBlueId("Product-BlueId")
public class Product {
    private String name;
    private String description;
    private Price price;
    private String sku;
}

// Specialization with enum for type-safe values
public enum OperatingSystem {
    IOS, ANDROID
}

@TypeBlueId("Smartphone-BlueId")
public class Smartphone extends Product {
    private Double screenSize;
    private OperatingSystem operatingSystem;
}

// Converting between YAML and Java objects
String phoneYaml = """
        name: iPhone 14
        type: Smartphone
        price:
          amount: 799.99
          currency: USD
        screenSize: 6.1
        operatingSystem: IOS
        """;

// Convert YAML to Java object
Blue blue = new Blue(nodeProvider);
Smartphone phone = blue.yamlToObject(phoneYaml, Smartphone.class);

// Verify the conversion
assert phone.getName().equals("iPhone 14");
assert phone.getPrice().getAmount() == 799.99;
assert phone.getPrice().getCurrency().equals("USD");
assert phone.getOperatingSystem() == OperatingSystem.IOS;
```

This bidirectional mapping ensures that Blue documents can be seamlessly converted to strongly-typed language objects and back again.

## Resolving Documents

When a Blue processor encounters a document with type references, it performs "resolution" to fully expand all types:

```yaml
# Before resolution
name: iPhone 14 Pro 256GB Black
type: iPhone 14
storage: "256GB"
color: Black
price:
  amount: 1099.99
  currency: USD

# After resolution (blueId: 9pq5Enj1LFqQskQShJwK9ERU8u2CLfnP2KbeaufYJyNm)
name: iPhone 14 Pro 256GB Black
type:
  name: iPhone 14
  type:
    name: iPhone
    type:
      name: Smartphone
      type:
        name: Electronics
        type:
          name: Product
          price:
            amount:
              type: Number
              contracts:
                amountValidator:
                  type: Schema Validator
                  minimum: 0
            currency:
              type: ISO-4217 Currency Code
          sku:
            type: Text
        manufacturer:
          type: Text
          contracts:
            requiredFieldValidator:
              type: Schema Validator
              required: true
      screenSize:
        type: Number
        contracts:
          screenSizeValidator:
            type: Schema Validator
            minimum: 3.5
            maximum: 7.5
      operatingSystem: iOS
    model: "14"
  storage:
    type: Text
    contracts:
      storageOptionsValidator:
        type: Schema Validator
        options: ["128GB", "256GB", "512GB", "1TB"]
storage: "256GB"
color: Black
price:
  amount: 1099.99
  currency: USD
```

Resolution creates a fully expanded document that contains all inherited properties and constraints. This ensures consistent validation and behavior across different processors.

You can perform resolution using the Blue library:

```java
// Parse the document
Blue blue = new Blue(nodeProvider);
Node node = blue.yamlToNode(yaml);

// Resolve types
Node resolved = blue.resolve(node);

// Convert back to YAML
String resolvedYaml = blue.nodeToYaml(resolved);
```

## Verifying Type Relationships

You can check if one document is a valid subtype of another:

```java
boolean isIPhone = blue.nodeMatchesType(myPhone, iPhone);  // true
boolean isSmartphone = blue.nodeMatchesType(myPhone, smartphone);  // true
boolean isProduct = blue.nodeMatchesType(myPhone, product);  // true
```

This enables type-safe operations in your applications when working with Blue documents.



====================
DOCUMENT: BlueId: Universal Content Addressing
CATEGORY: language
URL: https://language.blue/docs/language/blue-id
====================


# BlueId: Universal Content Addressing

In previous sections, we explored Blue's type system and document inheritance. Now, let's focus on the heart of Blue's referencing system: the BlueId—a unique identifier based on content rather than location.

## Beyond Traditional Identifiers

Traditional identifiers tell you where to find something:

- URLs point to server locations
- Database IDs reference rows in tables
- UUIDs are assigned to specific objects

In contrast, a BlueId represents what something is—its actual content and meaning. This creates a fundamentally different way of referencing digital objects:

```yaml
# Traditional approach
{
  "id": "8f7d3c2e-1a5b-4f9c-8d7e-6b2c1a3f4e5d",
  "name": "iPhone 14",
  "price": 799.99
}

# Blue approach
name: iPhone 14
price: 799.99
# This document's BlueId is HxvqaVPopn9w8CuVAp4YpVbiaEG25MmZY1UVZ4nA1hxV
```

The BlueId is not stored in the document—it's derived from the content itself.

## The Web of Meaning

Every BlueId represents a "word" in a vast language of digital content:

- Each document node has its own BlueId
- The same content always has the same BlueId
- BlueIds can reference content across systems and organizations
- Multiple documents can incorporate the same content via BlueId

This creates a web of interconnected content where references are based on meaning, not location:

<img
src="/img/blue/nodes.png"
alt="Nodes document view"
style={{ width: '60%', maxWidth: '400px' }}
/>

Traditional integration approaches require us to constantly reinvent and redefine our data models. It's like meeting new people and having to teach them the words for "apple," "orange," and "dog" every time. Blue offers a different approach—a shared semantic web where concepts are universally addressable.

Instead of trading isolated schemas and models, we're building a collective knowledge network. Any system can reference any node in this web without prior coordination. Your "Person" type can be immediately understood and used by others without custom mapping or translation.

This network is continuously enriched as more systems adopt Blue. Common patterns naturally emerge, and specialized domains can extend standard types without breaking compatibility. It's a living language that grows organically with use.

## BlueId Calculation

BlueId is a content hash calculated for every document using a bottom-up approach, starting from the deepest layer and propagating upwards. Technically, it's a base58-encoded SHA-256 hash of canonicalized JSON (following RFC8785 standards) for each node in the document. This ensures:

- Uniqueness: Any change in the content results in a new BlueId
- Integrity: Documents reference each other unambiguously
- Interconnectedness: Documents can be linked, forming a web of interconnected content

The specific normalization procedures and implementation details are available in the [Specification: BlueId Calculation](../spec#8-blueid-content-address) section.
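
As a small illustration of that interconnectedness, two independently written documents can reference exactly the same node by reusing its BlueId (here the `Simple Amount` type from the introduction). The document names are invented for the example; what matters is that both references unambiguously mean the same content:

```yaml
# Document A
name: Invoice Line
total:
  type:
    blueId: AR7X5ws8eHh2o6M3QczGVQmJ5pP3685vGj2PzYQZDLTS # Simple Amount
---
# Document B, authored elsewhere
name: Donation Pledge
pledged:
  type:
    blueId: AR7X5ws8eHh2o6M3QczGVQmJ5pP3685vGj2PzYQZDLTS # the same Simple Amount node
```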

## Walking the Blue Web

One of Blue's powerful capabilities is exploring this semantic web by retrieving referenced content as needed. Starting with any BlueId, you can "extend" outward to explore connected nodes.

For example, with our "Alice" document from earlier:

```yaml
# Starting point - just the BlueId
blueId: 9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU
```

We can extend one level to see the immediate properties:

```java
Node node = blue.blueIdToNode("9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU");
Node oneLevelExtended = blue.extend(node, PathLimits.withSinglePath("/*"));
```

Which produces:

```yaml
name: Alice
type:
  blueId: 6E93TBwTYYZ3zaWQhryCWz4rnJgGisaDgPrU8RnVLWuC # Person
age: 25
spent:
  blueId: HB8H2fLKAVZcEcqh19HiTehXjy25d22MMF6wAVvDzC4i # 27.15 USD
```

We can explore specific paths more deeply:

```java
Node spentExtended = blue.extend(node, PathLimits.withSinglePath("/spent/*"));
```

```yaml
name: Alice
type:
  blueId: 6E93TBwTYYZ3zaWQhryCWz4rnJgGisaDgPrU8RnVLWuC # Person
age: 25
spent:
  amount: 27.15
  currency: USD
```

## Practical Example: E-Commerce System

Let's explore a more complex real-world example showing how BlueIds connect multiple documents in an e-commerce system:

```yaml
# Customer order with BlueId references
name: Order #12345
date: 2023-06-15
customer:
  blueId: 9UpJ6SdjhX7MFXBKywH5ps5npUWAX76YnEDakrjPJ9LU # Alice
items:
  - product:
      blueId: HxvqaVPopn9w8CuVAp4YpVbiaEG25MmZY1UVZ4nA1hxV # iPhone 14
    quantity: 1
  - product:
      blueId: 8xYi5Svou5DVawB7CDEGuZitUGFChRYcJUF67bQ3NfXt # Wireless Earbuds
    quantity: 2
catalog:
  blueId: kL9mN0pQrStUvWxYz1A2B3C4D5E6F7G8H4FGh7j8 # Full product catalog
```

In this example, the order references:

- A customer profile (Alice)
- Individual products being ordered
- The entire product catalog (potentially thousands of products)

The order document remains lightweight because it only contains BlueId references to these larger entities. If we need customer details, we can extend just that path:

```java
Node orderWithCustomer = blue.extend(order, PathLimits.withSinglePath("/customer/**"));
```

```yaml
name: Order #12345
date: 2023-06-15
customer:
  name: Alice
  type:
    name: Person
    # Full Person type definition
  age: 25
  spent:
    amount: 27.15
    currency: USD
items:
  - product:
      blueId: HxvqaVPopn9w8CuVAp4YpVbiaEG25MmZY1UVZ4nA1hxV # iPhone 14
    quantity: 1
  - product:
      blueId: 8xYi5Svou5DVawB7CDEGuZitUGFChRYcJUF67bQ3NfXt # Wireless Earbuds
    quantity: 2
catalog:
  blueId: kL9mN0pQrStUvWxYz1A2B3C4D5E6F7G8H4FGh7j8 # Full product catalog
```

## Extension Risk Management

Extend carefully! Some BlueIds can represent enormous amounts of data:

- The `catalog` BlueId in our example might contain thousands of products, each with detailed specifications
- A BlueId representing Wikipedia would contain millions of interconnected articles
- A [Timeline](../technical-white-paper-timelines) of an active user could contain years of activity records

As in real life, you rarely need to download all knowledge—just the parts relevant to your current task. Blue's extension mechanism lets you precisely control how much data you retrieve.

## web.blue: The Blue Explorer

Web.blue is a website that helps you explore the Blue web. You can provide your own node providers—components that map BlueIds to documents—and visualize content in multiple ways:

As a document:

![Web.blue document view](/img/blue/webblue1.png)

Or as an interactive graph that you can traverse:

![Web.blue graph view](/img/blue/webblue2.png) ![Web.blue expanded graph](/img/blue/webblue3.png)

Web.blue demonstrates how BlueIds enable document exploration without prior knowledge of structure. The restaurant menu example loads gradually as content is retrieved from IPFS, showing how Blue documents can be distributed across decentralized storage systems.

This approach solves a fundamental web problem: content disappearing when URLs change. With BlueIds, you have the fingerprint of content, allowing you to retrieve it from any source that has it. The mapping between BlueIds and IPFS Content Identifiers follows specific transformation rules detailed in the [Java implementation](https://github.com/bluecontract/blue-language-java/blob/master/src/main/java/blue/language/provider/ipfs/BlueIdToCid.java).

## Why BlueId Matters

The BlueId approach provides several unique advantages:

1. **Semantic References**: References point to meaning, not just bytes
2. **Universal Vocabulary**: Every piece of content has a unique "word" that identifies it
3. **Location Independence**: Content can move between systems while references remain valid
4. **Natural Deduplication**: Identical content is automatically unified
5. **Verifiable Content**: BlueIds serve as built-in integrity checks

Compare this with other hash-based systems:

| System          | What it Hashes                               | Purpose                                    |
| --------------- | -------------------------------------------- | ------------------------------------------ |
| BlueId          | Semantic structure with meaning preservation | Content-addressing with semantic awareness |
| Git Hash        | File content + metadata                      | Version control                            |
| IPFS CID        | Raw bytes                                    | File storage                               |
| Blockchain Hash | Transaction data                             | Transaction verification                   |

The key distinction is that BlueId preserves meaning, not just structure or bytes.



====================
DOCUMENT: The Blue Directive
CATEGORY: language
URL: https://language.blue/docs/language/blue-directive
====================


# The Blue Directive

While Blue enforces a rigorous structure internally, it aims to be as user-friendly as possible for document authors. The `blue` directive bridges this gap by transforming human-friendly documents into proper Blue documents before processing begins.

## Preprocessing with the Blue Directive

The `blue` directive appears at the root level of a document and specifies transformations to apply during preprocessing:

```yaml
blue: Ticket Details v1.0
Ticket Number: ABC-12345
Departure Date: 2023-10-15
Seat: 14A
```

Preprocessing occurs before any other document processing, including BlueId calculation. This ensures that:

1. The original document can be user-friendly and flexible
2. The resulting document strictly follows Blue conventions
3. The BlueId is calculated only after preprocessing completes

The `blue` directive itself is removed during preprocessing, meaning it doesn't affect the document's BlueId or semantic content.

## Common Transformations

Let's explore the most useful transformations that can be included in the `blue` directive:

### Replace Inline Types with BlueIds

This transformation converts simple type names to their full BlueId references:

```yaml
blue:
  - type:
      blueId: 27B7fuxQCS1VAptiCPc2RMkKoutP5qxkh3uDxZ7dr6Eo
    mappings:
      Person: 8xYi5Svou5DVawB7CDEGuZitUGFChRYcJUF67bQ3NfXt
      Dog: G1pcQx2tq16z5yVqE9TGaCH5uCDAaMZ6uFts7d3NztYo

name: Alice Smith
type: Person
pet:
  type: Dog
  name: Rex
```

After preprocessing:

```yaml
name: Alice Smith
type:
  blueId: 8xYi5Svou5DVawB7CDEGuZitUGFChRYcJUF67bQ3NfXt
pet:
  type:
    blueId: G1pcQx2tq16z5yVqE9TGaCH5uCDAaMZ6uFts7d3NztYo
  name: Rex
```

This transformation (BlueId: `27B7fuxQCS1VAptiCPc2RMkKoutP5qxkh3uDxZ7dr6Eo`) allows you to write documents with simple type names that humans can understand.

### Infer Basic Types For Untyped Values

This transformation automatically determines types for primitive values:

```yaml
blue:
  - type:
      blueId: FGYuTXwaoSKfZmpTysLTLsb8WzSqf43384rKZDkXhxD4

x: 12
y: true
z: Hello
```

After preprocessing:

```yaml
x:
  type: Integer
  value: 12
y:
  type: Boolean
  value: true
z:
  type: Text
  value: Hello
```

This transformation (BlueId: `FGYuTXwaoSKfZmpTysLTLsb8WzSqf43384rKZDkXhxD4`) eliminates the need to explicitly specify types for every value.

### Convert Attribute Names To Camel Case

This transformation standardizes field names to programming-friendly format:

```yaml
blue:
  - type:
      blueId: GpMaofZmtLhEQbwkaYd3hm6Es1udEySM6svg8UoN1yEH

Seat No.: 156
Arrival Time: 15:25
```

After preprocessing:

```yaml
seatNo: 156
arrivalTime: 15:25
```

This transformation (BlueId: `GpMaofZmtLhEQbwkaYd3hm6Es1udEySM6svg8UoN1yEH`) makes documents both human-readable and programming-friendly.

### Normalize DateTime Values

This transformation converts date strings to structured DateTime objects:

```yaml
blue:
  - type:
      blueId: 93nhEGAmviA5Ey8wZ4ZfeHheAbzvxz473rG92zznvTps
    datetimePattern: yyyy-MM-dd HH:mm

arrival: 2025-03-27 15:25
```

After preprocessing:

```yaml
arrival:
  type: DateTime
  year: 2025
  month: 3
  day: 27
  hour: 15
  minute: 25
```

This transformation (BlueId: `93nhEGAmviA5Ey8wZ4ZfeHheAbzvxz473rG92zznvTps`) allows human-readable dates while ensuring proper structural representation.

### Map Attribute Names

This transformation translates field names between languages or formats:

```yaml
blue:
  - type:
      blueId: 27B7fuxQCS1VAptiCPc2RMkKoutP5qxkh3uDxZ7dr6Eo
    mappings:
      チケット整理番号: ticketSerial
      出発日時: departure
      座席番号: seatNo

チケット整理番号: HL-923554
出発日時: 2025-03-27 15:25
座席番号: 15
```

After preprocessing:

```yaml
ticketSerial: HL-923554
departure: 2025-03-27 15:25
seatNo: 15
```

This enables content creation in any language while maintaining consistent field names in the processed document.

## Default Blue Directive

Every Blue document has an implicit default `blue` directive if none is specified:

```yaml
blue:
  - type:
      blueId: 27B7fuxQCS1VAptiCPc2RMkKoutP5qxkh3uDxZ7dr6Eo
    mappings:
      Text: DLRQwz7MQeCrzjy9bohPNwtCxKEBbKaMK65KBrwjfG6K
      Double: 7pwXmXYCJtWnd348c2JQGBkm9C4renmZRwxbfaypsx5y
      Integer: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
      Boolean: 4EzhSubEimSQD3zrYHRtobfPPWntUuhEz8YcdxHsi12u
      List: 6aehfNAxHLC1PHHoDr3tYtFH3RWNbiWdFancJ1bypXEY
      Dictionary: G7fBT9PSod1RfHLHkpafAGBDVAJMrMhAMY51ERcyXNrj
  - type:
      blueId: FGYuTXwaoSKfZmpTysLTLsb8WzSqf43384rKZDkXhxD4
```

This default directive (BlueId: `6sqUywMoBRyj9hgQxSu2nDPnqcyiSM7xu9AB9sN98YJK`) provides mappings for basic types and enables type inference.

## A Complete Example

Let's see what happens when we apply multiple transformations together:

```yaml
blue: Ticket Details v.1.51
Ticket Serial No.: HL-923554
Departure: 2025-03-27 15:25
Seat No.: 15
```

After all transformations:

```yaml
ticketSerial:
  type:
    blueId: DLRQwz7MQeCrzjy9bohPNwtCxKEBbKaMK65KBrwjfG6K
  value: HL-923554
departure:
  type:
    blueId: 4UevQMiUg5BjYe9sX26xf4QHgFoKPmf7JVAM9wk4bmzR
  year:
    type:
      blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
    value: 2025
  month:
    type:
      blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
    value: 3
  day:
    type:
      blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
    value: 27
  hour:
    type:
      blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
    value: 15
  minute:
    type:
      blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
    value: 25
seatNo:
  type:
    blueId: 5WNMiV9Knz63B4dVY5JtMyh3FB4FSGqv7ceScvuapdE1
  value: 15
```

The BlueId of this fully processed document is `2YUzCvdTQKB34MutfhsKHEEbdNqJL6FVPNvNcVVvBJDP`.

## The Power of Text Aliases

One of the most powerful aspects of the Blue directive is the ability to use text aliases for transformation sets:

```yaml
blue: Ticket Details v.1.51
Ticket Serial No.: HL-923554
Departure: 2025-03-27 15:25
Seat No.: 15
```

```yaml
blue: Ticket Details v.1.51 JP
チケット整理番号: HL-923554
出発日時: 2025-03-27 15:25
座席番号: 15
```

Despite being written in different languages, both documents preprocess to identical structures with the same BlueId: `2YUzCvdTQKB34MutfhsKHEEbdNqJL6FVPNvNcVVvBJDP`.

To make this work, you register aliases with your Blue processor:

```java
blue.addPreprocessingAliases(Map.of(
        "Ticket Details v.1.51", "5VEx7ee9to3Z56eVdkNzBfpHaZU9KD4moYttpS5h6bHt",
                                     "Ticket Details v.1.51 JP", "3wfkqtvZvgiff55ZhrtToK3sDcPHvKKY5DcZw68nor3i"
));
```

## URL References and Security

The `blue` directive can also reference a URL:

```yaml
blue: https://language.blue/simple.blue
name: Alice
type: Person
age: 25
```

For security reasons, URL fetching is disabled by default in most Blue processors. To enable it:

```java
blue.enablePreprocessingDirectivesFetchForUrls();
```

:::caution
Enabling URL fetching creates potential security vulnerabilities. Use registered aliases in production systems.
:::

## Creating Custom Transformations

Anyone can define custom transformations for specialized preprocessing needs. These transformations must be registered with processors that need to understand them.
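
A custom transformation is referenced from the `blue` directive the same way as the built-in ones: by its BlueId, plus whatever parameters it defines. Everything below is hypothetical and only mirrors the shape of the earlier directive entries:

```yaml
blue:
  - type:
      blueId: Custom-Transformation-BlueId   # hypothetical; must be registered with your processor
    stripFieldsPrefixedWith: internal        # hypothetical parameter understood by that transformation

name: Quarterly Report
internalDraftNotes: to be stripped during preprocessing
```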



====================
DOCUMENT: Introduction to Contracts
CATEGORY: contracts
URL: https://language.blue/docs/contracts/introduction
====================


# Introduction to Contracts

Blue contracts are the foundation of dynamic, event-driven behavior in Blue documents. While the Blue language itself provides structure and content addressing, contracts bring documents to life by defining how they respond to events and interact with the world.

## What Are Blue Contracts?

Blue contracts are components within documents that define:

1. **Event channels** that documents listen to
2. **Processing rules** determining how to handle events
3. **State transitions** that update document content
4. **Operations** that documents expose to the outside world

Unlike traditional smart contracts on blockchain platforms, Blue contracts don't require global consensus or a specific runtime environment. Instead, they focus on deterministic behavior that any compliant processor can verify independently.

## Core Philosophy

Blue contracts are built on four key principles:

- **Event-Driven Architecture**: Documents react to events from multiple sources
- **Deterministic Processing**: Given the same inputs, deterministic contracts produce identical results across all processors
- **Channel-First Design**: All interactions begin with channels
- **Composable Behavior**: Contracts combine to create sophisticated applications

## Core Components

### 1. Channels

Channels are the entry points for all events into a document:

```yaml
contracts:
  userTimeline:
    type: Timeline Channel
    account: alice@example.com
```

Every event that affects a document must enter through a channel. Channels can represent user timelines, document changes, REST API endpoints, or external systems.

Channels may also declare an optional `event` matcher that describes which incoming events they accept at this scope (coarse filtering before handlers run):

```yaml
contracts:
  userTimeline:
    type: Timeline Channel
    account: alice@example.com
    event:
      message:
        type: User Action
```

In this example, only timeline entries whose `message.type` is `User Action` will be delivered through `userTimeline`; other entries on the same underlying timeline are ignored by this document.

### 2. Handlers

Handlers process events from channels and execute business logic:

```yaml
contracts:
  paymentHandler:
    type: Sequential Workflow
    channel: paymentEvents
    steps:
      - name: Verify Amount
        # Step details...
```

The most common handler is the Sequential Workflow, which executes a series of steps in response to events.

### 3. Additional Contract Types

Beyond channels and handlers, Blue supports other specialized contracts:

```yaml
contracts:
  schemaValidator:
    type: Schema Validator
    amount:
      minimum: 0
      maximum: 1000000
```

These contracts serve specific purposes like validating document content, defining operations that others can invoke, or specifying how processors should handle the document. Each contract type focuses on a particular aspect of document behavior.

## Hierarchical Processing

Blue documents can contain nested nodes with their own contracts:

```yaml
name: Payment Agreement
status: pending
btcPayment:
  type: BTC Payment
  contracts:
    btcChannel:
    # BTC-specific contracts
contracts:
  statusChannel:
  # Parent document contracts
```

When processing such documents, the processor evaluates all contracts at all levels of the hierarchy, creating a composed system that respects all defined behaviors.
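
A slightly more filled-in sketch (account names and values are invented; it reuses only contract types introduced in this documentation) shows both levels carrying their own contracts, all of which the processor evaluates:

```yaml
name: Payment Agreement
status: pending
btcPayment:
  type: BTC Payment
  amount: 0.05
  contracts:
    btcAmountValidator:
      type: Schema Validator
      amount:
        minimum: 0              # constraint on the nested node's own content
    btcTimeline:
      type: Timeline Channel
      account: payer@example.com
contracts:
  statusChannel:
    type: Timeline Channel
    account: merchant@example.com
  statusWorkflow:
    type: Sequential Workflow
    channel: statusChannel
    steps:
      - name: Mark Paid
        type: Update Document
        changeset:
          - op: replace
            path: /status
            val: paid
```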

## Document Processing Mechanics

The Blue processing model follows a single-event pattern:

1. An event arrives at a document through a channel
2. The processor identifies workflows listening to that channel
3. Workflows execute their logic, potentially updating the document
4. If updates occur, Document Update Channels may trigger additional processing
5. Workflows may also emit new events, causing further processing
6. This cycle continues until the document reaches a stable state
7. The processor returns the final updated document and any triggered events

This approach ensures that all compliant processors will reach exactly the same final state, given the same inputs. The deterministic nature of this processing is fundamental to Blue's trust model - allowing independent verification without requiring global consensus.

A critical aspect of this model is maintaining consistent event ordering when processing events from multiple channels, and ensuring that all processors reference the same external reality. For example, when processors handle events from external systems like cryptocurrency transactions or sports results, they must agree on the canonical source of truth. Blue addresses these challenges through channel coordination patterns and specialization mechanisms, which we'll explore in detail in the "Channel-Based Event Model" section.

## Comparison to Traditional Smart Contracts

| Feature                | Blue Contracts                 | Blockchain Smart Contracts  |
| ---------------------- | ------------------------------ | --------------------------- |
| Execution Environment  | Any Blue processor             | Specific blockchain VM      |
| Consensus Mechanism    | Independent verification       | Global network consensus    |
| State Storage          | Document-centric               | Global ledger               |
| Integration Approach   | Multiple channels              | Limited oracle integrations |
| Privacy Model          | Flexible, document-specific    | Public by default           |
| Scripting Capabilities | JavaScript and more            | Platform-specific language  |
| Reality Model          | Multiple verified perspectives | Single global state         |

Blue's approach mirrors real-world reasoning - individuals independently observe events, apply shared rules, and reach compatible conclusions without central coordination. Unlike blockchain's enforced global consensus, Blue enables multiple valid perspectives while maintaining verification through clear rules and shared evidence. This provides flexibility without sacrificing trust, allowing systems to model reality more naturally.

## A Minimal Example

Here's a complete working example of a Blue document with contracts:

```yaml
name: Simple Counter
counter: 0
contracts:
  # Define a channel for events
  incrementChannel:
    type: Timeline Channel
    account: user@example.com

  # Define a workflow for those events
  counterWorkflow:
    type: Sequential Workflow
    channel: incrementChannel
    steps:
      - name: Increment Counter
        type: Update Document
        changeset:
          - op: replace
            path: /counter
            val: ${document('/counter') + 1}
```

This document listens to a timeline and increments a counter each time an event arrives.

## Processing Example

Here's how this document would be processed in Java:

```java
// Parse the document
Blue blue = new Blue(nodeProvider);
Node doc = blue.yamlToNode(yaml);

// Create a processor for the document
DocumentProcessor processor = blue.processor(doc);

// Process an incoming event
ProcessingResult result = processor.processEvent(timelineEvent);

// Get the updated document and triggered events
Node updatedDoc = result.getUpdatedDocument();
List<Node> triggeredEvents = result.getTriggeredEvents();

// The counter is now incremented
// updatedDoc will have counter = 1
```

We can also convert the Blue document to a Java object:

```java
// Define a Java class representing the document
@Data
public class SimpleCounter {
  private int counter;
}

// Convert the updated document to a Java object
SimpleCounter simpleCounter = blue.nodeToObject(updatedDoc, SimpleCounter.class);

// Access the counter property
assert simpleCounter.getCounter() == 1;
```

This example demonstrates the complete lifecycle of contract processing:

1. An event arrives through the `incrementChannel`
2. The `counterWorkflow` processes the event
3. The document is updated with counter incremented
4. The processor returns the updated document
5. The document can be mapped to Java objects for easier manipulation

Each processor following the Blue contract specification will reach identical results when processing the same events in the same order.



====================
DOCUMENT: Processing Checkpoints
CATEGORY: contracts
URL: https://language.blue/docs/contracts/checkpoints
====================


# Processing Checkpoints

Blue documents evolve as they process events, changing their state—and therefore their BlueId—while maintaining their fundamental identity. Processing checkpoints capture this temporal dimension, serving as formal markers in a document's ongoing journey.

## The Need for State Tracking

Consider a document that processes events from a timeline:

```yaml
name: Simple Counter
counter: 0
contracts:
  incrementChannel:
    type: Timeline Channel
    account: user@example.com

  counterWorkflow:
    type: Sequential Workflow
    channel: incrementChannel
    steps:
      - name: Update Counter
        type: Update Document
        changeset:
          - op: replace
            path: /counter
            val: ${document('/counter') + 1}
```

As this document processes events, its state changes:

- After one event: `counter: 1`
- After two events: `counter: 2`
- After three events: `counter: 3`

Each state change creates a new BlueId—a new content hash representing the document's new reality. Without some record of which events created this state, processors would have no way to know where in the event stream this document currently exists.

## The Processing Checkpoint Contract

The Processing Checkpoint contract serves as a formal record of a document's position in time:

```yaml
name: Simple Counter
counter: 2
contracts:
  incrementChannel:
    type: Timeline Channel
    account: user@example.com

  counterWorkflow:
    type: Sequential Workflow
    channel: incrementChannel
    steps:
      - name: Update Counter
        type: Update Document
        changeset:
          - op: replace
            path: /counter
            val: ${document('/counter') + 1}

  checkpoint:
    type: Processing Checkpoint
    # Implementation details follow
```

This contract is optional, but when present, processors are expected to automatically update it after processing each event, maintaining an accurate record of the document's temporal position.

## Common Checkpoint Types

The Blue language allows various checkpoint implementations. Here are the most common types:

### Channel Event Checkpoint

This approach records the BlueId of the last processed event from each channel:

```yaml
checkpoint:
  type: Channel Event Checkpoint
  lastEvents:
    incrementChannel:
      blueId: 7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn # Last processed event
```

This approach:

- Provides exact tracking of which events were processed
- Works reliably across decentralized processors
- Supports multi-channel documents with independent event streams

For documents with multiple channels, all are tracked independently:

```yaml
checkpoint:
  type: Channel Event Checkpoint
  lastEvents:
    userChannel:
      blueId: 7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn
    paymentChannel:
      blueId: CgJ83PcZwuAHboEL6AjrbJsxTWRTPzY8WR8Y2zAMXRbyd
    weatherChannel:
      blueId: JXiEdbLToPRWNQEekNxrxfQWBG8wmfjEqugPEEXByMYW
```

### Timestamp Checkpoint

For some scenarios, a timestamp-based checkpoint is used:

```yaml
checkpoint:
  type: Timestamp Checkpoint
  timestamp: '2023-10-15T14:30:00Z' # Last processed event timestamp
```

Important considerations for timestamp checkpoints:

- The timestamp must be deterministic (typically derived from event timestamps)
- All processors must interpret time identically
- Precision must be sufficient to distinguish between events

Timestamp checkpoints work best in controlled environments where:

- Events have reliable timestamps
- Event ordering by time is consistent
- Time interpretation is standardized across processors

## The River of Time: Document Identity Across States

In traditional databases, identity is imposed externally: a record with ID 12345 remains "the same record" regardless of how its contents change. But in a content-addressed system like Blue, the fundamental question becomes more profound: what makes a document "the same document" when its content—and thus its BlueId—changes over time?

The answer lies in the Processing Checkpoint contract, which creates a formal history of causality. This history binds different document states together, creating a continuous identity through time:

```yaml
# The document in its past (BlueId: a1b2c3...)
name: Simple Counter
counter: 0
contracts:
  # Channel and workflow definitions
  checkpoint:
    type: Channel Event Checkpoint
    lastEvents:
      incrementChannel: { blueId: null } # No events processed yet
```

```yaml
# The document in its present (BlueId: d4e5f6...)
name: Simple Counter
counter: 2
contracts:
  # Channel and workflow definitions
  checkpoint:
    type: Channel Event Checkpoint
    lastEvents:
      incrementChannel: { blueId: 8Y2zAMXRbydQj83PcZwuAHboEL6AjrbJsxTWRTPzY }
```

These two documents have different BlueIds because their content has changed, yet they represent the same logical entity at different points in its lifecycle. The checkpoint creates a causal connection between them, showing how one evolved into the other.

This is more than just a technical feature—it's a formal model of identity through change. Like the philosophical Ship of Theseus which remains "the same ship" even as all its planks are replaced over time, a Blue document remains "the same document" through its processing history despite having completely different content and BlueId.

## The Three Temporal Dimensions

The Processing Checkpoint gives every document three temporal dimensions:

### 1. Verifiable Past

The checkpoint's record of processed events creates a verifiable history. Any processor can:

- Validate that the document legitimately evolved from a known earlier state
- Reproduce the document's current state by replaying events
- Verify the integrity of document processing

### 2. Known Present

The checkpoint itself represents the document's "now"—its current position in the event timeline. This gives processors:

- A clear understanding of what state the document represents
- Confidence in which events have been incorporated
- A foundation for consistent behavior across systems

### 3. Processable Future

By knowing exactly which events have been processed, the checkpoint enables:

- Precise continuation of processing from the current state
- Clear criteria for what constitutes a "new" event
- Orderly evolution to future states

## Automatic Checkpoint Management

Blue processors update checkpoints automatically after each event is processed:

```java
// Process an event
ProcessingResult result = processor.processEvent(event);
Node updatedDoc = result.getUpdatedDocument();

// The checkpoint is automatically updated
Node checkpoint = updatedDoc.getAsNode("/contracts/checkpoint/lastEvents/incrementChannel");
String lastEventId = checkpoint.getBlueId();
assert lastEventId.equals(event.getBlueId());
```



====================
DOCUMENT: Events from Multiple Sources
CATEGORY: contracts
URL: https://language.blue/docs/contracts/coordination
====================


# Events from Multiple Sources

Blue documents often receive events from multiple channels. While each channel maintains its own ordering, a key challenge arises: how should events from different channels be sequenced relative to each other? This page explains how Blue handles this challenge through channel coordination contracts.

## The Multi-Channel Counter Problem

Let's start with a concrete example: a document that counts events from both Alice and Bob:

```yaml
name: Alice and Bob Counter
counterAlice: 0
counterBob: 0
contracts:
  aliceChannel:
    type: Timeline Channel
    account: alice@example.com

  bobChannel:
    type: Timeline Channel
    account: bob@example.com

  aliceWorkflow:
    type: Sequential Workflow
    channel: aliceChannel
    steps:
      - name: Update Alice Counter
        type: Update Document
        changeset:
          - op: replace
            path: /counterAlice
            val: ${document('/counterAlice') + 1}

  bobWorkflow:
    type: Sequential Workflow
    channel: bobChannel
    steps:
      - name: Update Bob Counter
        type: Update Document
        changeset:
          - op: replace
            path: /counterBob
            val: ${document('/counterBob') + 1}
```

This document works well when processed by a single processor that receives events in a fixed order. But what happens when multiple processors independently handle the same document? Without coordination, they might process the events in different orders, leading to different final states.

## The Coordination Problem

Consider this sequence of events:

1. Alice adds event A1 to her timeline
2. Bob adds event B1 to his timeline
3. Alice adds event A2 to her timeline

Processor 1 might see: A1 → B1 → A2  
Processor 2 might see: A1 → A2 → B1  
Processor 3 might see: B1 → A1 → A2

Without coordination, these processors will reach different document states, breaking Blue's deterministic processing guarantee. This is where channel coordination contracts become essential.

## Centralized Coordinator

For scenarios where centralized processing is acceptable, the Centralized Coordinator provides a straightforward solution:

```yaml
name: Alice and Bob Counter with Central Coordination
counterAlice: 0
counterBob: 0
contracts:
  aliceChannel:
    type: Timeline Channel
    account: alice@example.com

  bobChannel:
    type: Timeline Channel
    account: bob@example.com

  channelCoordinator:
    type: Centralized Coordinator
    authority: myos.blue
    channels: [aliceChannel, bobChannel]

  # Workflows as before
```

### Benefits and Limitations

**Benefits:**

- Simple implementation
- Immediate ordering decisions
- No coordination overhead for participants
- Higher performance

**Limitations:**

- Requires trust in the central authority
- Creates a single point of failure

## Vector Clock Coordinator

The Vector Clock Coordinator enables decentralized coordination between independent timelines by implementing a logical clock system:

```yaml
name: Alice and Bob Counter with Vector Coordination
counterAlice: 0
counterBob: 0
contracts:
  aliceChannel:
    type: Timeline Channel
    account: alice@example.com

  bobChannel:
    type: Timeline Channel
    account: bob@example.com

  channelCoordinator:
    type: Vector Clock Coordinator
    channels: [aliceChannel, bobChannel]

  # Workflows as before
```

### How Vector Clock Coordination Works

This coordination mechanism:

1. Requires each timeline entry to include a vector clock (counters for all participants)
2. Establishes causal relationships between events based on these vectors
3. Orders concurrent events deterministically using timeline IDs
4. Ensures all processors reach the same event ordering

In practice, each timeline entry includes a vector showing what the participant has seen:

```yaml
# Example entry in Alice's timeline
type: Vector Clock Message
message: 'Increment counter'
vectorClock:
  alice@example.com: 2 # Alice's timeline (incremented)
  bob@example.com: 1 # Last seen event from Bob
```

The Vector Clock Coordinator enables resilient decentralized coordination even during network partitions, though it requires active participation from all channels.

### Benefits and Limitations

**Benefits:**

- Works in fully decentralized environments
- No central authority required
- Guarantees causal consistency
- Participants maintain autonomy

**Limitations:**

- Requires active participation from all channels
- Increases timeline entry complexity
- May delay event processing while waiting for acknowledgments
- Creates some performance overhead

## Ethereum Coordinator

The Ethereum Coordinator uses an Ethereum smart contract to establish a canonical ordering:

```yaml
name: Alice and Bob Counter with Blockchain Coordination
counterAlice: 0
counterBob: 0
contracts:
  aliceChannel:
    type: Timeline Channel
    account: alice@example.com

  bobChannel:
    type: Timeline Channel
    account: bob@example.com

  channelCoordinator:
    type: Ethereum Coordinator
    contractAddress: '0x742d35Cc6634C0532925a3b844Bc454e4438f44e'
    chainId: 1
    channels: [aliceChannel, bobChannel]

  # Workflows as before
```

### How the Ethereum Coordinator Works

This coordination mechanism:

1. Requires participants to submit event hashes to an Ethereum smart contract
2. Uses the blockchain's ordering (block number and transaction index) to sequence events
3. Ensures all processors can independently verify the same sequence

The Ethereum smart contract typically implements a simple event registry:

```solidity
// Simple Event Registry Contract
contract EventRegistry {
    event EventRegistered(bytes32 eventId, uint256 timestamp);

    function registerEvent(bytes32 eventId) external {
        emit EventRegistered(eventId, block.timestamp);
    }
}
```

Processors observe the Ethereum blockchain to determine the canonical order of registered events.

### Benefits and Limitations

**Benefits:**

- Decentralized without requiring participant cooperation
- Provides tamper-resistant ordering
- Resistant to participant collusion
- Creates publicly verifiable evidence

**Limitations:**

- Higher cost (Ethereum gas fees)
- Slower confirmation times
- Requires Ethereum infrastructure
- Adds blockchain dependency

## Choosing a Coordination Strategy

When designing Blue documents with multiple channels, consider these factors:

1. **Trust Model**: Is a central authority acceptable, or is full decentralization required?

2. **Performance Needs**: Is immediate processing critical, or can events wait for coordination?

3. **Participant Capabilities**: Can all participants implement complex coordination protocols?

4. **Resilience Requirements**: How important is operation during infrastructure disruptions?



====================
DOCUMENT: Channel Binding
CATEGORY: contracts
URL: https://language.blue/docs/contracts/channel-binding
====================


# Channel Binding

Blue documents can interact with events from the external world - from cryptocurrency transactions to weather conditions. This creates a powerful connection between digital documents and physical reality, but introduces an important question: how do we establish a shared understanding of external events?

## Individual Interpretation: A Starting Point

Let's explore a document that tracks two different types of external events:

```yaml
name: External Reality Counter
btcBlockCount: 0
sunnyDayCount: 0
contracts:
  btcChannel:
    type: BTC Ledger Channel
    network: mainnet

  weatherChannel:
    type: Channel
    description: 'Incoming events represent sunny days in New York'

  btcBlockWorkflow:
    type: Sequential Workflow
    channel: btcChannel
    steps:
      - name: Increment BTC Block Counter
        type: Update Document
        changeset:
          - op: replace
            path: /btcBlockCount
            val: ${document('/btcBlockCount') + 1}

  weatherWorkflow:
    type: Sequential Workflow
    channel: weatherChannel
    steps:
      - name: Increment Sunny Day Counter
        type: Update Document
        changeset:
          - op: replace
            path: /sunnyDayCount
            val: ${document('/sunnyDayCount') + 1}
```

This document increments counters when Bitcoin blocks are confirmed or when there's a sunny day in New York. By default, each processor can decide for itself:

- How to connect to the Bitcoin network
- What constitutes a "sunny" day in New York

This approach works in scenarios where:

- The document is processed by a single entity making its own determinations
- Participants have a shared understanding of how to interpret external events
- Precise agreement isn't critical to the document's purpose

This mirrors everyday casual agreements like "If it's sunny tomorrow, let's go to the park" - we don't typically specify the exact definition of "sunny" or which weather service to consult.

However, if Alice and Bob both process this document independently, they'll likely reach different counts based on their individual interpretations of "sunny" or which Bitcoin node they connect to.

## Establishing Shared Sources: Channel Bindings

For most practical Blue documents, we need to ensure all processors work with the same events. This is where channel bindings come in - they establish authoritative sources for events.

Let's modify our counter to use shared event sources:

```yaml
name: Shared Reality Counter
type: External Reality Counter
contracts:
  mainBtcLedgerChannel:
    type: MyOS Timeline Channel
    account: main-btc-channel@myos.blue
    event:
      message:
        type: Block Confirmation

  weatherObservationChannel:
    type: MyOS Timeline Channel
    account: new-york-weather-channel@myos.blue
    event:
      message:
        type: Weather Observation
        condition: sunny

  channelBindings:
    type: Channel Source Binding
    btcChannel: mainBtcLedgerChannel
    weatherChannel: weatherObservationChannel
```

With this configuration:

1. The abstract `btcChannel` now receives events from the concrete `mainBtcLedgerChannel`
2. The abstract `weatherChannel` receives events from the concrete `weatherObservationChannel`
3. All processors using this document will work with identical events
4. The concrete MyOS channels use their `event` matcher to accept only `Block Confirmation` and `Weather Observation` (with `condition: sunny`) entries from the underlying timelines; all other entries are ignored at this scope

The `Channel Source Binding` preserves the document's logical structure while ensuring all processors share the same understanding of external events.
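To illustrate how the matcher behaves, here is a sketch of a timeline entry on the weather timeline that the `weatherObservationChannel` would accept; the `temperature` field is a hypothetical extra payload field added only for illustration:

```yaml
# Sketch of a matching entry on the weather timeline (illustrative only)
type: Timeline Entry
timeline:
  type: MyOS Timeline
  account: new-york-weather-channel@myos.blue
message:
  type: Weather Observation
  condition: sunny          # matches the channel's event filter
  temperature: 24           # hypothetical extra field, not part of the filter
```

An entry with a different message type, or with `condition` set to anything other than `sunny`, would be ignored at this scope.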

## Why Channel Bindings Matter

Channel bindings solve a fundamental challenge in distributed systems: establishing agreement about external reality. By explicitly defining authoritative sources, Blue documents can:

1. **Create consistency across processors**: Everyone works with the same inputs
2. **Maintain document simplicity**: Keep the core logic focused on what happens, not where events come from
3. **Enable verification**: Observers can check the authoritative sources
4. **Preserve logical structure**: Abstract channels define what the document responds to, concrete channels define where events come from

This pattern is essential for practically any Blue document that needs to function across multiple processors or participants.



====================
DOCUMENT: Sequential Workflows
CATEGORY: contracts
URL: https://language.blue/docs/contracts/sequential-workflow
====================


# Sequential Workflows

Sequential Workflows are the processing engines of Blue documents, defining how documents respond to events with a series of ordered steps. These workflows enable deterministic business logic that transforms static documents into dynamic, responsive entities.

## The Workflow Concept

At its core, a Sequential Workflow is a contract that:
1. Listens to a specific channel for events
2. Processes those events through an ordered sequence of steps
3. Updates the document or triggers additional events
4. Produces deterministic results across all processors

Unlike traditional programming where execution can follow many paths, Sequential Workflows guarantee the same outcome for every processor given the same inputs—critical for maintaining Blue's trustless verification model.

## A Practical Example: Online Lesson

Let's explore Sequential Workflows through a practical example—an online tutoring platform where teachers and students coordinate lessons.

First, let's define the confirmation event type that our workflow will process:

```yaml
name: Online Lesson Confirmation
confirmLesson:
  description: Determines if the lesson should be confirmed or not
  type: Boolean
priceInUsd:
  description: Price in USD
  type: Double
```

Now, let's create our main document type:

```yaml
name: Online Lesson
lessonDate:
  description: Scheduled date and time for the lesson
  type: DateTime
durationInMinutes:
  description: Duration of the lesson in minutes
  type: Positive Integer
approved:
  description: Whether the teacher has approved this lesson
  type: Boolean
  default: false
priceInUsd:
  description: Price of the lesson in USD
  type: Double
teacherMessage:
  description: Message from the teacher about the lesson
  type: Text
contracts:
  teacherChannel:
    description: Channel for teacher communication
    type: Timeline Channel
  
  studentChannel:
    description: Channel for student communication
    type: Timeline Channel
    
  confirmLessonWorkflow:
    type: Sequential Workflow
    channel: teacherChannel
    event:
      message:
        type: Online Lesson Confirmation
    steps:
      - name: ProcessConfirmation
        type: JavaScript Code
        code: |
          // Extract confirmation data from the event
          const confirmation = event.message;
          
          // Prepare data for document update
          return {
            approved: confirmation.confirmLesson,
            price: confirmation.confirmLesson ? confirmation.priceInUsd : null,
            message: confirmation.confirmLesson ? 
              "Lesson confirmed at $" + confirmation.priceInUsd : 
              "Lesson declined by teacher"
          };
      - name: UpdateLessonStatus
        type: Update Document
        changeset:
          - op: replace
            path: /approved
            val: ${steps.ProcessConfirmation.approved}
          - op: replace
            path: /priceInUsd
            val: ${steps.ProcessConfirmation.price}
          - op: replace
            path: /teacherMessage
            val: ${steps.ProcessConfirmation.message}
```

This document defines:
- Basic properties for an online lesson (date, duration, approval status)
- Channels for both teacher and student
- A Sequential Workflow that processes confirmation events from the teacher

:::note Timeline Entry Payloads
Timeline Channels deliver complete Timeline Entries. Access metadata such as actor or timestamp via `event`, and read the business payload via `event.message`.
:::

## Creating a Specific Lesson

Using the Online Lesson type, we can create a specific lesson instance:

```yaml
name: Alice-Bob Lesson
type: Online Lesson
lessonDate:
  year: 2025
  month: 5
  day: 1
  hour: 13
durationInMinutes: 90
```

Initially, this lesson is pending approval. The teacher (Alice) must send a confirmation event to approve or decline it:

```yaml
# Approval timeline entry
type: Timeline Entry
timeline:
  type: MyOS Timeline
  account: alice@example.com
message:
  type: Online Lesson Confirmation
  confirmLesson: true
  priceInUsd: 99.99

# Or, alternatively, rejection message
type: Timeline Entry
timeline:
  type: MyOS Timeline
  account: alice@example.com
message:
  type: Online Lesson Confirmation
  confirmLesson: false
```

When the workflow processes this event, it will:
1. Extract confirmation details in the JavaScript step
2. Update the document's approval status, price, and message in the Update Document step

After processing an approval, the document would look like this:

```yaml
name: Alice-Bob Lesson
type: Online Lesson
lessonDate:
  year: 2025
  month: 5
  day: 1
  hour: 13
durationInMinutes: 90
approved: true
priceInUsd: 99.99
teacherMessage: "Lesson confirmed at $99.99"
```

The key benefit here is that both Alice and Bob can run this document using their own independent processors. As long as they have access to the same timeline events, they will always reach identical document states—ensuring consistent interpretation without requiring central coordination.

## Workflow Components

Let's examine the key components of Sequential Workflows:

### Channel Binding

Every workflow must specify which channel it listens to:

```yaml
confirmLessonWorkflow:
  type: Sequential Workflow
  channel: teacherChannel
```

This binding determines which events will trigger the workflow. A single channel can trigger multiple workflows, and a document can contain many workflows listening to different channels.

If you need a single workflow to react to events from multiple channels, define a [Composite Timeline Channel](composite-timeline-channel.md) and point the workflow at it. The workflow will run once per matching child channel, and `event.meta.compositeSourceChannelKey` tells you which child produced the event.

### Event Filtering (Optional)

Workflows can optionally filter for specific event types:

```yaml
event:
  message:
    type: Online Lesson Confirmation
```

When specified, only Timeline Entries whose payload matches this filter will trigger the workflow. Without this filter, the workflow processes all events from the channel.

### Steps

The heart of a workflow is its sequence of steps, executed in order:

```yaml
steps:
  - name: Step1
    # Step details
  - name: Step2
    # Step details
```

Each step receives:
- The triggering event (Timeline Entry, including payload via `event.message`)
- The current document state
- Results from all previous steps

## Common Step Types

Blue provides several standard step types that all processors must implement, but the architecture is extensible—allowing for custom step types as needed.

### JavaScript Code Step

The JavaScript Code Step enables custom logic with full access to event data and document state:

```yaml
- name: ProcessConfirmation
  type: JavaScript Code
  code: |
    // Timeline payload data is available via event.message
    const confirmation = event.message;
    
    // Document access via document() function
    const currentDuration = document('/durationInMinutes');
    
    // Return data for use in subsequent steps
    return {
      approved: confirmation.confirmLesson,
      price: confirmation.confirmLesson ? confirmation.priceInUsd : null
    };
```

By default, `event`, `document()`, and `currentContract` return the Blue node mapped via the **simple** JSON strategy (values and arrays are plain, metadata is omitted). When you need the canonical representation that includes `value`, `items`, `type`, etc., use the `.canonical` helpers and the `canon` utility to navigate or unwrap the result.

#### JavaScript Context

The JavaScript environment provides these capabilities:

| Variable/Function | Description |
|-------------------|-------------|
| `event` | Triggering event as plain JSON (`'simple'` mapping) |
| `eventCanonical` | Event rendered with canonical metadata (`'official'` mapping) |
| `currentContract` | Handler contract currently executing, as plain JSON (`'simple'` mapping) |
| `currentContractCanonical` | Handler contract with canonical metadata (`'official'` mapping) |
| `document(path)` | Reads document state at the specified (absolute, relative, or empty) path as plain JSON |
| `document.canonical(path)` | Reads document state using canonical metadata |
| `steps` | Object containing results from previous steps |
| `canon` | Helper namespace: `canon.unwrap(canonical, deep=true)` converts canonical JSON back to plain objects/arrays, and `canon.at(canonical, pointer)` navigates canonical structures using JSON Pointers |

The runtime is a deterministic QuickJS sandbox (Wasm). Non-deterministic or heavy APIs are disabled, including:
- `Date`, `Math.random`, `eval`/`Function`, `Promise`/async, `RegExp`, `Proxy`
- `ArrayBuffer`/typed arrays, `WebAssembly`, `console`, `Array.prototype.sort`

`JSON.parse` and `JSON.stringify` are available, but in a limited deterministic form:
- `JSON.parse` accepts standard JSON only. `reviver` is not supported.
- `JSON.stringify` always returns canonical, compact JSON. `replacer` and pretty-printing are not supported.

Values crossing the VM boundary must be DV-compatible (null, boolean, number, string, array, object with string keys). JavaScript execution is metered by the VM; the processor converts VM fuel to host gas using a fixed calibration factor (currently 1700 fuel per host gas unit), in addition to standard processor gas charges.
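As a rough sketch of the canonical helpers in use (the exact canonical shape is determined by the processor, so treat the commented behavior as an assumption):

```yaml
- name: InspectDuration
  type: JavaScript Code
  code: |
    // Plain JSON view ('simple' mapping): metadata omitted, just the value
    const duration = document('/durationInMinutes');

    // Canonical view ('official' mapping): keeps metadata such as type and value
    const durationCanonical = document.canonical('/durationInMinutes');

    // canon.unwrap converts the canonical form back to plain values
    const unwrapped = canon.unwrap(durationCanonical);

    return { plain: duration, unwrapped: unwrapped };
```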

### Update Document Step

The Update Document Step modifies the document using standard JSON Patch operations:

```yaml
- name: UpdateLessonStatus
  type: Update Document
  changeset:
    - op: replace
      path: /approved
      val: ${steps.ProcessConfirmation.approved}
    - op: replace
      path: /priceInUsd
      val: ${steps.ProcessConfirmation.price}
```

#### Available Operations

| Operation | Description |
|-----------|-------------|
| `replace` | Replace an existing value |
| `add` | Add a new value (or create if path doesn't exist) |
| `remove` | Remove a value |
| `copy` | Copy from one location to another |
| `move` | Move from one location to another |
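A sketch combining several of these operations in one changeset; the paths are illustrative, and for `copy` we assume the standard JSON Patch `from` field:

```yaml
- name: ApplyDiscount
  type: Update Document
  changeset:
    - op: copy
      from: /priceInUsd              # assumes standard JSON Patch 'from'
      path: /originalPriceInUsd
    - op: replace
      path: /priceInUsd
      val: ${document('/priceInUsd') * 0.9}   # apply a 10% discount
    - op: add
      path: /discountApplied
      val: true
    - op: remove
      path: /teacherMessage
```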

#### Template Expressions

Within an Update Document step, you can use JavaScript template expressions with the `${...}` syntax:

```yaml
- op: replace
  path: /total
  val: ${document('/subtotal') * 1.1}  # Add 10% tax
```

Template expressions are evaluated by the same deterministic QuickJS VM used for JavaScript Code steps. They use the same bindings and limits, including the restricted `JSON.parse` / `JSON.stringify` support, and their VM fuel usage is converted into host gas using the same calibration factor.

These expressions can reference:
- Document values via `document('/path')`
- Previous step results via `steps.StepName.property`
- Event metadata via `event.property`
- Contract metadata via `currentContract.property` (use `currentContractCanonical` for canonical metadata)
- Timeline payload fields via `event.message.property`
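For example, a single expression can combine several of these sources; the `event.timeline.account` path mirrors the Timeline Entry structure shown earlier and is used here purely for illustration:

```yaml
- op: replace
  path: /teacherMessage
  val: ${'Confirmed by ' + event.timeline.account + ' at $' + steps.ProcessConfirmation.price}
```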

### Custom and Specialized Steps

The Blue ecosystem supports extensible step types:

- **Standard Steps**: Must be implemented by all processors (JavaScript Code, Update Document, [Trigger Event](events))
- **Custom Steps**: Can be created for specialized purposes
- **Platform-Specific Steps**: Platforms like MyOS offer additional steps like "Call Agent" for integrating with their agent ecosystem

As with all other contracts, make sure that the processors you and your collaborators rely on support the contract types and workflow step types defined in your documents. The more standard steps you use, the easier it is to ensure compatibility across different environments.



====================
DOCUMENT: Embedded Documents
CATEGORY: contracts
URL: https://language.blue/docs/contracts/embedded-documents
====================


# Embedded Documents

Blue documents can embed other documents that have their own contracts and behaviors. This creates compositional structures where both parent and embedded documents process events independently while maintaining a consistent state. This page explains how to use these embedded documents and ensure they're properly processed.

## Moving Beyond Simple Documents

While previous examples focused on single documents with contracts, real-world applications often require complex document structures with embedded components:

- Payment agreements with embedded payment method documents
- Event contracts with embedded ticket verification systems
- Legal agreements with embedded identity verification flows

These embedded documents can have their own channels, workflows, and processing states. However, Blue doesn't automatically process every embedded document—it requires explicit instructions about which embedded documents should be active.

## Extending Our Online Lesson Example

Let's extend our online tutoring platform to include payment processing. In addition to setting the price as before, we'll now have the teacher provide a complete payment document:

```yaml
name: Online Lesson with Payment
type: Online Lesson
payment:
  description: Payment details for this lesson
  type: Stripe Payment
contracts:
  # Enable processing of the embedded payment document
  embedded:
    type: Process Embedded
    paths:
      - /payment
  
  # Extend the existing workflow to set the payment
  confirmLessonWorkflow:
    steps:
      - blueId: bT4vHkzPa8Ypd5KJsLNcA3FV6xDqbnG1pcQx2tq16z5  # BlueId of the original steps
      - name: UpdatePayment
        type: Update Document
        changeset:
          - op: replace
            path: /payment
            val: ${event.message.payment}
```

For now, we're simply embedding the payment document so it's included in processing. In the [Events](events) section, we'll explore how this payment document can interact with the parent document through events.

:::tip Workflow Extension Pattern
In the example above, we're not redefining the entire workflow - we're extending it. By referencing the BlueId of the existing steps defined in the "Online Lesson" type and then adding another step, we're effectively saying "do everything the original workflow does, then do this additional step."

This builds on Blue's [type extending mechanism](../language/types) and creates a clean inheritance pattern where we maintain all the original functionality while adding payment-specific behavior.
:::

## Explicit Processing with Process Embedded

The key to working with embedded documents is the `Process Embedded` contract:

```yaml
embedded:
  type: Process Embedded
  paths:
    - /payment
    - /gameStats
```

This contract tells the processor exactly which embedded documents should be processed. Without this declaration, embedded documents remain static data—their contracts aren't evaluated, and they don't receive events.

### The Rationale for Selective Processing

Consider a document with multiple embedded references:

```yaml
name: Order #12345
customer:
  blueId: HVkRPb4L3EQey6rdwypi7na7ATz5P3PLH8Q3PSy2WQ94  # Customer profile
payment:
  blueId: 2qCKS6yS11cYGabM3nPXsbd6DYK4m6WNrEqGGyySDCaD  # Payment document
productCatalog:
  blueId: kL9mN0pQrStUvWxYz1A2B3C4D5E6F7G8H4FGh7j8  # Entire catalog with thousands of products
```

Automatically processing every BlueId reference would be inefficient, especially for large references like product catalogs. The explicit `Process Embedded` approach ensures processors only extend and process the specific paths needed for the document's functionality.

## How Embedded Document Processing Works

When a processor encounters a `Process Embedded` contract, it:

1. Extends all specified paths to retrieve the full embedded documents
2. Evaluates all contracts within those embedded documents
3. Processes events for both the parent and embedded documents
4. Maintains checkpoints independently for each document
5. Returns the updated parent document with updated embedded documents

This creates a composite processing environment where multiple documents work together while maintaining their independence.

## Temporal Synchronization with Checkpoints

An embedded document may be temporally behind its parent document (that is, it has an earlier [checkpoint](checkpoints)). When this occurs, the processor must reconcile the temporal difference.

### Channel Coordination During Embedding

When multiple documents with their own contracts are processed together, they must share a common coordinator to determine event ordering. This coordinator ensures consistent event processing across all embedded documents.

For documents with different temporal positions, the coordinator will process earlier events first. This means:

1. The coordinator collects events from all channels across all processed documents
2. It orders these events according to its coordination strategy
3. When an embedded document is behind temporally, its events will naturally come first in the ordered sequence
4. The embedded document will process all its backlogged events before the parent document processes newer events

This isn't the processor "recognizing a gap"—it's simply the coordinator doing its normal job of chronologically ordering events from all available channels.

## Embedded Node Channels

Embedded documents can also serve as event sources through the `Embedded Node Channel` contract:

```yaml
gameEventsChannel:
  type: Embedded Node Channel
  path: /game
```

This contract treats events from the embedded document as if they were coming from a separate channel, allowing the parent document to listen for and react to them.

## Example: Historical Game Processing

This example shows how to embed a historical game document and track specific events:

```yaml
name: LeBron Blocks Counter
blocks: 0
game:
  blueId: 7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn  # Cavs-Warriors Game 7 Finals 2016
contracts:
  # Enable processing of the embedded game
  embedded:
    type: Process Embedded
    paths:
      - /game

  # Channel for game events
  gameChannel:
    type: Embedded Node Channel
    path: /game
  
  # Listen for blocking events
  blocksTracker:
    type: Sequential Workflow
    channel: gameChannel
    event:
      type: Blocked Shot
      player: LeBron James
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /blocks
            val: ${document('/blocks') + 1}
```

### How This Works

In this example, we're embedding a game from 2016 into a current document. When processing begins:

1. The coordinator will see that the game document is temporally far behind (from 2016)
2. All game events will be sequenced before any current-day events
3. The game events will be processed in chronological order, incrementing the blocks counter
4. This effectively "replays" the historical game within our document

We're essentially playing back a complete game recording within our document, with our contracts reacting to it. Since the game is from the past, all its events will be processed before any current events from other channels.

### Circular References

To learn how to address circular references, see [Blue Language Specification - Circular References](../spec#11-circular-references-combined-blueid-for-direct-cycles).



====================
DOCUMENT: Events
CATEGORY: contracts
URL: https://language.blue/docs/contracts/events
====================


# Events

Events are the foundation of Blue document interactions—they flow into documents through channels, trigger workflows, and can be emitted by documents during processing. This page explains how events function within the Blue ecosystem and how documents can both react to and generate events.

## The Event Lifecycle

Every Blue document interaction follows this pattern:

1. An event enters a document through a channel
2. Workflows process the event, potentially updating the document state
3. During processing, the document may generate new events
4. Any generated events are included in the processing result
5. Events directed to internal channels are also processed immediately as part of the same cycle

## Extending Our Online Lesson Example

Building on the embedded payment document from the previous page, let's configure our lesson document to react to events from the embedded payment:

```yaml
name: Online Lesson with Payments and Events
type: Online Lesson with Payment

contracts:
  # Channel for payment events
  paymentChannel:
    type: Embedded Node Channel
    path: /payment
  
  # Handler for payment success
  paymentSuccessHandler:
    type: Sequential Workflow
    channel: paymentChannel
    event:
      type: Payment Successful
    steps:
      - name: UpdateLessonStatus
        type: Update Document
        changeset:
          - op: replace
            path: /status
            val: "paid"
```

In this example, we create a channel that listens for events from the embedded payment document. When the payment document processes a successful transaction, it emits a "Payment Successful" event. Our parent document receives this event through the `paymentChannel` and updates the lesson status to "paid."

This pattern creates a clean separation:
- The payment document handles transaction processing
- The lesson document manages the overall lesson state
- Events provide the communication bridge between them

## Document Update Channels

Document Update events occur whenever a specific path in a document changes. You can listen for these changes using a Document Update Channel:

```yaml
statusUpdateChannel:
  type: Document Update
  path: /status
```

This channel emits events whenever the `/status` field changes. These events include both the old and new values:

```yaml
type: Document Update
path: /status
before: "pending"
after: "confirmed"
```

:::note
Document Update channels only trigger when the exact node at the specified path changes, not when children of that path change. For example, `/payment/status` changes won't trigger a channel watching `/payment`.
:::

### Notifications

`Document Processing Notification` is a useful event type that can be used to show messages to the user about processing changes:

```yaml
# Add to our Online Lesson contracts
statusUpdateChannel:
  type: Document Update
  path: /status

# Notification workflow for status changes
statusNotificationWorkflow:
  type: Sequential Workflow
  channel: statusUpdateChannel
  steps:
    - name: CreateNotification
      type: JavaScript Code
      code: |
        // Create appropriate notification based on new status
        const newStatus = event.after;
        const oldStatus = event.before;
        
        let message;
        switch(newStatus) {
          case "confirmed":
            message = "Your lesson has been confirmed! Please complete payment.";
            break;
          case "paid":
            message = "Payment received! Your lesson is fully booked.";
            break;
          default:
            message = `Lesson status changed from ${oldStatus} to ${newStatus}.`;
        }
        
        return {
          events: [
            {
              type: "Document Processing Notification",
              message: message
            }
          ]
        };
```

This workflow:
1. Listens for changes to the lesson status
2. Creates appropriate notification messages
3. Emits notification events

These notification events are included in the processing result and can be handled by external systems to send emails or other communications.

## Event Triggers

The Event Action Trigger contract provides a declarative way to transform events without requiring JavaScript code:

```yaml
lessonStatusTriggers:
  type: Event Action Trigger
  events:
    - on:
        type: Document Processing Initiated
      emit:
        type: Lesson Created
        createdAt: ${event.timestamp}
    - on:
        type: Document Update
        path: /status
        after: "paid"
      emit:
        type: Lesson Ready
        lessonId: ${document('/id')}
        scheduledTime: ${document('/lessonDate')}
```

This contract:
- Automatically generates a "Lesson Ready" event when status changes to "paid"
- Emits a "Lesson Created" event when the document is first processed

The `Document Processing Initiated` event is a system event that fires automatically when a document with no previous [checkpoint](checkpoints.md) is processed for the first time.

## Trigger Event Step

Events can also be triggered from [sequential workflows](sequential-workflow) through Trigger Event step:

```yaml
- name: SendConfirmation
  type: Trigger Event
  event:
    type: Lesson Confirmation
    message: "Your lesson has been confirmed for ${document('/lessonDate')}"
```

Trigger Event payload expressions are evaluated by the same deterministic QuickJS VM used for JavaScript Code steps, with the same bindings and restrictions. VM fuel usage is converted into host gas using the same calibration factor.

## Error Events

Blue represents errors as events, allowing consistent handling of both normal and error conditions. Error events typically include:

- **type**: The error category (Validation Error, Processing Error, Fatal Error)
- **message**: A human-readable explanation
- **context**: Additional information like affected fields or error codes

Common error types include:

| Error Type | Description | Processing Impact |
|------------|-------------|-------------------|
| Validation Error | Invalid input data | Processing continues |
| Fatal Error | Unrecoverable failure | Processing stops |

Fatal errors (such as division by zero in JavaScript) stop all processing and prevent document updates, ensuring documents remain in a valid state.
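As an illustration, a Validation Error event might look roughly like this; the exact fields inside `context` are an assumption, since they vary by error:

```yaml
type: Validation Error
message: 'Request does not match the operation definition'
context:
  path: /request
  expectedType: Integer
```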

## Event Processing Mechanics

When a Blue processor receives an event, it follows this sequence:

1. The event enters through a channel
2. Workflows listening to that channel execute in a deterministic order
3. These workflows may update the document and generate new events
4. Generated events directed to internal channels are processed immediately
5. This cycle continues until all internal events are processed
6. The processor returns:
   - The final document state
   - All generated events



====================
DOCUMENT: Operations
CATEGORY: contracts
URL: https://language.blue/docs/contracts/operations
====================


# Operations

Operations are named, structured interfaces that documents expose to the outside world. They define specific actions that can be performed on a document, complete with formal request definitions.

## The Role of Operations in Blue

Blue documents interact with the world through channels, processing events that arrive through these pathways. Operations build on this foundation by creating well-defined action endpoints within a document.

When to use operations:

- To create clearly named actions that express intent (e.g., `approvePayment`, `scheduleLesson`)
- To enforce specific input structures and validation rules
- To organize document capabilities into logical functional units
- To enable discoverability of available actions

Operations provide an organizational layer that helps structure interactions with a document. They create a clear API for document manipulation while working through the same underlying channel mechanisms that power all Blue document interactions.

## Defining Operations

Operations are defined using the `Operation` contract:

```yaml
increment:
  type: Operation
  request:
    description: Represents the value by which the counter will be incremented
    type: Integer
```

This definition specifies:
1. The operation name (`increment`)
2. The expected input structure (`request`)
3. A description for the request

## Implementing Operations

Once defined, operations need an implementation. The most common approach is using a `Sequential Workflow Operation`:

```yaml
incrementImpl:
  type: Sequential Workflow Operation
  operation: increment
  steps:
    - type: Update Document
      changeset:
        - op: replace
          path: /counter
          val: ${event.message.request + document('/counter')}
```

This implementation:
1. Links to the `increment` operation definition
2. Defines processing steps that execute when the operation is called

The operation's implementation can access:
- The request payload via `event.message`
- Timeline metadata (actor, timestamp, etc.) via `event`
- The current document state via `document()`
- The current handler contract via `currentContract` (use `currentContractCanonical` for canonical metadata)
- Any other contextual information needed

## Invoking Operations

Operations are invoked by posting a Timeline Entry whose `message` is an `Operation Request` to the document's channel:

```yaml
type: Timeline Entry
timeline:
  type: MyOS Timeline   # Or another Timeline Provider
  account: alice@example.com
message:
  type: Operation Request
  operation: increment        # Name of the operation to call
  request: 5                  # The increment value
  document:
    blueId: 7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn  # Target document ID
  allowNewerVersion: true     # Accept if document has changed
actor:
  type: Principal Actor
  name: Alice
```

### Request Formats

The `request` field can contain:

1. **Simple values** for primitive types:
   ```yaml
   request: 5  # Integer
   ```

2. **Structured objects** for complex types:
   ```yaml
   request:
     a: 12
     b: xyz
   ```

The request structure must match what's defined in the operation contract. If it doesn't match, the operation won't execute and will return an error.
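For a structured request like the one above, the operation definition would describe each field. A sketch, assuming fields are defined the same way as in other type definitions in these docs (the operation name and field names are hypothetical):

```yaml
configure:
  type: Operation
  request:
    a:
      description: Example numeric parameter
      type: Integer
    b:
      description: Example text parameter
      type: Text
```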

### The `allowNewerVersion` Flag

The `allowNewerVersion` flag addresses an important concern: what happens if the document changes between when you retrieve it and when you send your operation?

- `true`: Process the operation on the latest version, even if it changed
- `false`: Only process if the document still has the same BlueId as specified

This gives callers control over concurrent modifications:
- Use `true` when the operation should work regardless of other changes
- Use `false` when the operation depends on the document being in a specific state

## Complete Counter Example

Let's bring everything together with our simple counter example:

```yaml
name: Simple Counter
counter: 0
contracts:
  ownerChannel:
    type: Timeline Channel
  
  # Defines the increment operation interface
  increment:
    type: Operation
    request:
      description: Represents the value by which the counter will be incremented
      type: Integer

  # Implements the increment operation
  incrementImpl:
    type: Sequential Workflow Operation
    operation: increment
    steps:
      - type: Update Document
        changeset:
          - op: replace
            path: /counter
            val: ${event.message.request + document('/counter')}
```

When a client wants to increment the counter:

1. It posts a Timeline Entry containing an Operation Request to the document's timeline:
   ```yaml
   type: Timeline Entry
   timeline:
     type: MyOS Timeline
     account: alice@example.com
   message:
     type: Operation Request
     operation: increment
     request: 5
     document:
       blueId: 7UEBwTmRMfQ92rGt4vHkzPa8Ypd5KJsLNcA3FV6xDqbn
     allowNewerVersion: true
   actor:
     type: Principal Actor
     name: Alice
   ```

2. Any compliant document processor:
   - Receives the Timeline Entry through the `ownerChannel`
   - Extracts the Operation Request from `event.message`
   - Identifies the `increment` operation and its implementation
   - Executes the Update Document step, adding 5 to the counter

3. The counter is now incremented by 5

## Operations in MyOS

You can run the example above in [MyOS](https://myos.blue).

![Natural language operations](/img/myos/QuickStart/02.png)



====================
DOCUMENT: Composite Timeline Channel
CATEGORY: contracts
URL: https://language.blue/docs/contracts/composite-timeline-channel
====================


# Composite Timeline Channel

Composite Timeline Channels let you treat multiple timeline channels as one logical channel. They match if an incoming event would match any child channel (union). They are useful when a single workflow should respond to events from several sources while preserving each child's filtering and recency behavior.

## Basic Structure

```yaml
contracts:
  aliceChannel:
    type: Timeline Channel
    account: alice@example.com

  bobChannel:
    type: Timeline Channel
    account: bob@example.com

  activityChannel:
    type: Composite Timeline Channel
    channels: [aliceChannel, bobChannel]
```

The `channels` field lists contract keys in the same scope. Each child channel is evaluated independently, and the composite delivers one event per matching child in the order listed.

## Example: One Workflow, Two Sources

```yaml
contracts:
  teacherChannel:
    type: Timeline Channel
    account: teacher@example.com

  studentChannel:
    type: Timeline Channel
    account: student@example.com

  messagesChannel:
    type: Composite Timeline Channel
    channels: [teacherChannel, studentChannel]

  messageWorkflow:
    type: Sequential Workflow
    channel: messagesChannel
    steps:
      - name: RouteMessage
        type: JavaScript Code
        code: |
          const source = event.meta?.compositeSourceChannelKey;

          if (source === 'teacherChannel') {
            return { events: [{ type: 'Teacher Message' }] };
          }

          if (source === 'studentChannel') {
            return { events: [{ type: 'Student Message' }] };
          }

          return { events: [] };
```

## Event Metadata

When the composite delivers an event, it adds the child channel key to `event.meta.compositeSourceChannelKey`. Use this value to branch on the originating channel.

## Validation Rules

- Every entry in `channels` must reference a channel declared in the same `contracts` map.
- Composite channels may reference other composite channels, but cycles are not allowed.
- If `channels` is empty or missing, the composite never matches.

## Recency and Checkpoints

Composite channels keep per-child recency by checkpointing each child independently. An older event on one child will not block newer events from another child, and each child channel's own recency rules still apply.
