LCM is a DAG-based context compression engine that ships with ArgentOS Core. It replaces flat compaction with hierarchical summarization — every message is stored permanently in an immutable database, and agents can search and recall any past message even after compaction.

Why LCM Exists

Standard context management has a fundamental problem: when the context window fills up, old messages are summarized into a flat blob and the originals are discarded. The agent loses access to details discussed earlier in the conversation. LCM solves this by maintaining a hierarchical summary DAG (directed acyclic graph) alongside an immutable message store. Summaries compress the history for the model’s context window, but the originals are always available for recall via agent tools.

Within-session

LCM keeps the agent coherent during long conversations via DAG compression with full recall.

Across sessions

MemU provides persistent memory across months and years of interaction.

How It Works

Immutable Message Store

Every user message, assistant response, and tool result is persisted verbatim in a SQLite database (~/.argentos/lcm.db). Messages are never modified or deleted. This is the ground truth that everything else builds on.

The Summary DAG

As the context window fills, LCM compresses older messages into a hierarchical DAG of summaries:
  • Leaf summaries (depth 0): Created from chunks of ~20,000 tokens of raw messages, compressed to ~1,200 tokens each
  • Condensed summaries (depth 1+): Created when 4+ same-depth summaries accumulate, merged into ~2,000 token nodes
  • Fresh tail: The 32 most recent messages are always kept raw (configurable)
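The thresholds above can be sketched as a planning step. This is illustrative only: the function name and return shape are hypothetical, and only the numeric defaults come from the documentation.

```python
# Compression thresholds from the documentation; the planning logic is a sketch.
LEAF_CHUNK_TOKENS = 20_000       # raw tokens gathered per leaf summary
LEAF_TARGET_TOKENS = 1_200       # target size of each leaf summary
CONDENSE_FANOUT = 4              # same-depth summaries needed before merging
CONDENSED_TARGET_TOKENS = 2_000  # target size of a condensed summary

def plan_compaction(message_tokens: list[int], summary_depths: list[int]):
    """Decide which oldest messages fold into a new leaf summary, and which
    DAG depths have accumulated enough summaries to condense."""
    # Gather the oldest messages until one leaf chunk's worth is reached.
    chunk, total = [], 0
    for i, tokens in enumerate(message_tokens):
        chunk.append(i)
        total += tokens
        if total >= LEAF_CHUNK_TOKENS:
            break
    leaf = chunk if total >= LEAF_CHUNK_TOKENS else []

    # Condense whenever 4+ summaries exist at the same depth.
    counts: dict[int, int] = {}
    for d in summary_depths:
        counts[d] = counts.get(d, 0) + 1
    to_condense = [d for d, n in counts.items() if n >= CONDENSE_FANOUT]
    return leaf, sorted(to_condense)
```

For example, three messages of 8,000, 9,000, and 7,000 tokens cross the 20,000-token chunk boundary and become one leaf, and a depth with four summaries qualifies for condensation.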

Context Assembly

Each turn, the agent sees compressed history summaries followed by the fresh tail of recent raw messages. The summaries provide continuity while the fresh tail preserves full detail for the active conversation.
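A minimal sketch of that ordering, assuming a hypothetical assembly function (the real assembly is internal to LCM; only the default fresh-tail size of 32 comes from the documentation):

```python
FRESH_TAIL_COUNT = 32  # default freshTailCount

def assemble_context(summaries: list[str], messages: list[str],
                     fresh_tail_count: int = FRESH_TAIL_COUNT) -> list[str]:
    """Compressed history summaries first, then the raw fresh tail."""
    tail = messages[-fresh_tail_count:]  # most recent raw messages
    return summaries + tail
```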

Three-Level Escalation

Compaction is guaranteed to converge via three escalation levels:

Level | Strategy | Temperature | When
Normal | Narrative summary, preserve details | 0.2 | Default
Aggressive | Bullet points only, half the tokens | 0.1 | Normal summary too large
Deterministic | Mechanical truncation, no LLM | N/A | LLM fails or returns oversized output
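The escalation can be sketched as a fallback chain. The `summarize` callable and its signature are assumptions for illustration; the point is that the final level is mechanical and cannot fail:

```python
def compact(text: str, budget: int, summarize) -> str:
    """Try normal, then aggressive summarization; fall back to truncation."""
    for strategy, temp in (("normal", 0.2), ("aggressive", 0.1)):
        try:
            out = summarize(text, strategy=strategy, temperature=temp)
            if len(out) <= budget:   # the real system checks tokens, not chars
                return out
        except Exception:
            pass                     # LLM failure: fall through to next level
    # Deterministic fallback: mechanical truncation, no LLM involved,
    # so compaction always converges.
    return text[:budget]
```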

Agent Tools

LCM registers three tools that give agents recall capabilities over compacted history:

aos_lcm_grep

Full-text search across the entire conversation history, including messages compacted out of context.

aos_lcm_describe

Retrieve the full content of a specific message or summary node by ID.

aos_lcm_expand_query

Expand a summary back to its original source messages by walking the DAG.

Example workflow: the agent needs to recall something from hours ago that was compacted out of context:
  1. Agent calls aos_lcm_grep with a search query
  2. Gets ranked results with snippets and message IDs
  3. Calls aos_lcm_describe to read the full message
  4. Or calls aos_lcm_expand_query to expand a summary back to its originals
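Conceptually, the expansion step walks the DAG from a summary node down to its original messages. The dict-based store below is a stand-in for the summary_sources and summary_messages link tables described under Database; the actual tool interface is not shown here.

```python
def expand(node_id: str, summary_sources: dict, summary_messages: dict) -> list[str]:
    """Return the original message IDs under a summary node, in order."""
    if node_id in summary_messages:        # leaf: links directly to messages
        return list(summary_messages[node_id])
    out: list[str] = []
    for child in summary_sources.get(node_id, []):  # condensed: recurse
        out.extend(expand(child, summary_sources, summary_messages))
    return out
```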

Configuration

LCM is enabled by default. Configuration is in argent.json:
{
  "plugins": {
    "entries": {
      "aos-lcm": {
        "enabled": true,
        "config": {
          "freshTailCount": 32,
          "contextThreshold": 0.75,
          "summaryModel": "auto"
        }
      }
    }
  }
}

Options

Option | Default | Description
enabled | true | Enable LCM context management
freshTailCount | 32 | Recent messages kept raw (not summarized)
contextThreshold | 0.75 | Trigger compaction at this fraction of the context window
summaryModel | "auto" | Model for summarization; "auto" uses the model router
leafChunkTokens | 20000 | Token target per leaf compaction chunk
leafTargetTokens | 1200 | Token target for leaf summary output
condensedTargetTokens | 2000 | Token target for condensed summaries
incrementalMaxDepth | -1 | Maximum DAG depth (-1 = unlimited)
largeFileTokenThreshold | 25000 | Files above this are stored externally
databasePath | ~/.argentos/lcm.db | Custom database path

Disabling LCM

{
  "plugins": {
    "entries": {
      "aos-lcm": { "enabled": false }
    }
  }
}

Large File Handling

Files over 25,000 tokens (configurable) are automatically stored externally with compact exploration summaries injected into context. The full content remains accessible via aos_lcm_describe.
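A rough sketch of that gate, assuming a simple 4-characters-per-token estimate (an assumption for illustration, not LCM's real tokenizer; only the 25,000-token threshold comes from the documentation):

```python
LARGE_FILE_TOKEN_THRESHOLD = 25_000  # default largeFileTokenThreshold

def should_store_externally(content: str) -> bool:
    """Route oversized files to external storage instead of context."""
    approx_tokens = len(content) // 4  # crude heuristic, not a real tokenizer
    return approx_tokens > LARGE_FILE_TOKEN_THRESHOLD
```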

Database

LCM uses its own standalone SQLite database at ~/.argentos/lcm.db, separate from MemU’s PostgreSQL. The database uses WAL mode for concurrent reads and includes FTS5 full-text indexes for fast grep searches.
Table | Purpose
messages | Immutable message store
messages_fts | FTS5 full-text search index
summaries | DAG summary nodes
summary_messages | Leaf summary to source message links
summary_sources | Condensed summary to source summary links
context_items | Active context window state
large_files | Externally stored large-file metadata
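A minimal FTS5 demo of the kind of query behind aos_lcm_grep. The schema here is illustrative only (a single-column virtual table); LCM's real messages_fts table has its own columns:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages_fts USING fts5(content)")
db.executemany("INSERT INTO messages_fts(content) VALUES (?)",
               [("deploy the gateway",), ("rotate the API key",)])

# MATCH runs the full-text query; snippet() highlights hits; rank orders
# results by relevance (best match first).
rows = db.execute(
    "SELECT rowid, snippet(messages_fts, 0, '[', ']', '…', 5) "
    "FROM messages_fts WHERE messages_fts MATCH ? ORDER BY rank",
    ("gateway",),
).fetchall()
```

In a real deployment the database also runs in WAL mode, which lets readers query while the writer appends new messages.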

Attribution

LCM is adapted from the Lossless Context Management architecture by Clint Ehrlich and Theodore Blackman at Voltropy PBC. The original lossless-claw OpenClaw plugin is MIT licensed. The ArgentOS adaptation adds integration with the gateway, model router, plugin hook system, and agent tool registration.