What is TOON?

TOON (Token-Oriented Object Notation) is a compact data serialization format designed specifically for LLM prompts. It encodes structured data using 40-50% fewer tokens than JSON while remaining human-readable and LLM-parseable. ArgentOS uses TOON throughout the system wherever structured data needs to be injected into agent prompts.

Why Not Just Use JSON?

Every token in an LLM prompt costs money and takes up context window space. When you inject a 50-item task list or a 20-step workflow history into an agent’s prompt, JSON’s verbosity adds up fast:
| Format | Tokens (typical) | Savings |
| --- | --- | --- |
| JSON (pretty) | 1,000 | baseline |
| JSON (compact) | 750 | 25% |
| TOON | 450-550 | 40-50% |
TOON achieves this by using a tabular format for uniform arrays, eliminating repeated keys, and using minimal delimiters.
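The tabular mechanism can be sketched in a few lines of TypeScript. This is a toy encoder for illustration, not the actual toon-encoding.ts implementation; it shows how writing the keys once as a header row removes the per-object key repetition that makes JSON arrays expensive:

```typescript
// Toy sketch of TOON's tabular idea: emit the keys once as a header
// row, then one delimiter-separated row per array element.
// (Illustrative only -- not the real toon-encoding.ts implementation.)
function encodeTabular(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return "";
  const keys = Object.keys(rows[0]); // keys written once, not per object
  const header = keys.join(" | ");
  const body = rows.map((r) => keys.map((k) => String(r[k])).join(" | "));
  return [header, ...body].join("\n");
}

const tasks = [
  { id: 1, agent: "frontend", status: "pending" },
  { id: 2, agent: "backend", status: "pending" },
];
console.log(encodeTabular(tasks));
// id | agent | status
// 1 | frontend | pending
// 2 | backend | pending
```

In JSON, `id`, `agent`, and `status` would each be serialized once per element; here they appear exactly once regardless of array length, which is where the savings scale with row count.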

Where ArgentOS Uses TOON

TOON is integrated into 9 core systems via src/utils/toon-encoding.ts:

Workflow Pipeline Context

When an agent step runs inside a workflow, it receives the full pipeline history encoded as TOON:
[PIPELINE_CONTEXT]
workflow.id: wf-123
workflow.runId: run-456
workflow.currentStep: 3
workflow.totalSteps: 5
steps:
  step | agent    | status    | duration | output
  1    | research | completed | 12s      | Found 5 relevant sources
  2    | analyze  | completed | 8s       | Key findings extracted
[/PIPELINE_CONTEXT]
This gives each agent full awareness of what happened before it, using a fraction of the tokens that JSON would require.
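As an illustration, a pipeline-context block like the one above could be assembled roughly as follows. This is a hand-rolled sketch; in ArgentOS this is handled by `encodePipelineContext` (covered under "How It Works in Code"), and the helper and types here are assumptions:

```typescript
// Hypothetical sketch of building a [PIPELINE_CONTEXT] block by hand.
// Field names follow the example above; the real system uses
// encodePipelineContext from src/utils/toon-encoding.ts instead.
type Step = { step: number; agent: string; status: string; duration: string; output: string };

function pipelineContext(
  wf: { id: string; runId: string; currentStep: number; totalSteps: number },
  steps: Step[],
): string {
  return [
    "[PIPELINE_CONTEXT]",
    `workflow.id: ${wf.id}`,
    `workflow.runId: ${wf.runId}`,
    `workflow.currentStep: ${wf.currentStep}`,
    `workflow.totalSteps: ${wf.totalSteps}`,
    "steps:",
    "  step | agent | status | duration | output",
    ...steps.map((s) => `  ${s.step} | ${s.agent} | ${s.status} | ${s.duration} | ${s.output}`),
    "[/PIPELINE_CONTEXT]",
  ].join("\n");
}

console.log(pipelineContext(
  { id: "wf-123", runId: "run-456", currentStep: 3, totalSteps: 5 },
  [{ step: 1, agent: "research", status: "completed", duration: "12s", output: "Found 5 relevant sources" }],
));
```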

Agent Handoffs

When one agent passes work to another in a multi-agent workflow:
[AGENT_HANDOFF]
from: research-agent
to: writer-agent
summary: Research complete — 5 sources analyzed, 3 key findings
artifact: /tmp/research-output.md
[/AGENT_HANDOFF]

Memory Recall Results

When the agent recalls memories, results are TOON-encoded before injection:
[MEMORY_CONTEXT]
results:
  id   | type        | significance | text                          | created
  m-1  | observation | high         | User prefers concise answers  | 2026-03-15
  m-2  | interaction | medium       | Discussed project timeline    | 2026-03-20
[/MEMORY_CONTEXT]

SpecForge Task Breakdowns

When SpecForge creates atomic tasks for a project:
[TASK_BREAKDOWN]
project: Website Redesign
tasks:
  id | agent    | title              | deps | status  | acceptance
  1  | frontend | Homepage layout    | none | pending | Visual match mockup
  2  | backend  | API endpoints      | none | pending | All tests pass
  3  | frontend | Connect to API     | 1,2  | blocked | Data loads correctly
[/TASK_BREAKDOWN]
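Because `deps` and `status` travel with every task row, a downstream agent or orchestrator can work out which tasks are currently unblocked with very little logic. A hypothetical helper (not part of SpecForge itself) over the data in the table above:

```typescript
// Hypothetical helper: given the deps column from a TASK_BREAKDOWN,
// a task is runnable once every one of its dependencies is completed.
type Task = { id: number; deps: number[]; status: "pending" | "blocked" | "completed" };

function runnable(tasks: Task[]): number[] {
  const done = new Set(tasks.filter((t) => t.status === "completed").map((t) => t.id));
  return tasks
    .filter((t) => t.status !== "completed" && t.deps.every((d) => done.has(d)))
    .map((t) => t.id);
}

// Mirrors the Website Redesign breakdown above: task 3 depends on 1 and 2.
const breakdown: Task[] = [
  { id: 1, deps: [], status: "pending" },
  { id: 2, deps: [], status: "pending" },
  { id: 3, deps: [1, 2], status: "blocked" },
];
console.log(runnable(breakdown)); // [1, 2] -- task 3 waits on 1 and 2
```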

Other Integration Points

  • Knowledge Library results — RAG search results encoded for prompt injection
  • Team status — multi-agent family status updates
  • Tool results — uniform tool output arrays (search results, file listings)
  • SIS active lessons — self-improving system lesson injection
  • Cron definitions — scheduled task summaries

How It Works in Code

The encoding layer at src/utils/toon-encoding.ts provides typed functions:
import { encodeForPrompt, encodePipelineContext, encodeHandoff,
         encodeMemoryResults, encodeTaskBreakdown, encodeTeamStatus } from "./utils/toon-encoding.js";

// Generic encoding with optional label
const encoded = encodeForPrompt(data, "MY_CONTEXT");

// Typed encoders for specific use cases
const pipelineCtx = encodePipelineContext({ workflow, steps, task });
const handoff = encodeHandoff({ from, to, summary, artifact });
const memory = encodeMemoryResults(results);
const tasks = encodeTaskBreakdown(atomicTasks, projectName);
const team = encodeTeamStatus(members);
All functions fall back to compact JSON if TOON encoding fails for a particular data structure.
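The fallback behavior can be sketched as follows; `encodeWithFallback` and its internals are illustrative assumptions rather than the library's actual code, but the shape matches the guarantee described above: the prompt always receives valid structured data, TOON if possible, compact JSON otherwise.

```typescript
// Sketch of the fallback pattern (hypothetical internals): try the TOON
// encoder, and on any failure emit compact JSON instead.
function encodeWithFallback(data: unknown, toonEncode: (d: unknown) => string): string {
  try {
    return toonEncode(data);
  } catch {
    return JSON.stringify(data); // compact JSON: no pretty-printing whitespace
  }
}

const failing = (): string => { throw new Error("unsupported shape"); };
console.log(encodeWithFallback({ a: 1 }, failing)); // {"a":1}
```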

Decoding

TOON content can be decoded back to structured data:
import { decodeFromPrompt } from "./utils/toon-encoding.js";

const data = decodeFromPrompt(text, "PIPELINE_CONTEXT");
The decoder falls back to JSON.parse if TOON decoding fails, mirroring the encoder's JSON fallback.
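A sketch of that decode path, with hypothetical internals: strip the `[LABEL]...[/LABEL]` envelope, try a TOON parser, and fall back to `JSON.parse` if it throws. The helper name and envelope-matching regex are assumptions for illustration:

```typescript
// Hypothetical sketch of decode-with-fallback: unwrap the labeled
// envelope, attempt TOON parsing, then fall back to JSON.parse.
function decodeWithFallback(
  text: string,
  label: string,
  toonDecode: (s: string) => unknown,
): unknown {
  const m = text.match(new RegExp(`\\[${label}\\]([\\s\\S]*?)\\[/${label}\\]`));
  const body = m ? m[1].trim() : text.trim();
  try {
    return toonDecode(body);
  } catch {
    return JSON.parse(body); // JSON fallback, mirroring the encoder
  }
}

const wrapped = '[MY_CONTEXT]\n{"a":1}\n[/MY_CONTEXT]';
const failing = (s: string): unknown => { throw new Error("not TOON"); };
console.log(decodeWithFallback(wrapped, "MY_CONTEXT", failing)); // { a: 1 }
```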

Learn More