Overview

The Provider Registry is ArgentOS’s dynamic catalog of LLM providers and their available models. It tracks API endpoints, authentication methods, model capabilities, pricing, and context windows for every provider the system can route to. The registry ships with seed data covering 11+ providers and 60+ models. Operators can customize entries, add new providers, and adjust model configurations. User edits are preserved across seed updates through a merge strategy that adds new seed entries without overwriting existing customizations.

How It Works

  1. On first run, the seed data is written to ~/.argentos/provider-registry.json.
  2. On subsequent runs, the file is read and its version compared against the seed version.
  3. If the file version is older, a merge is performed: new seed entries are added, existing user entries are preserved.
  4. If the file version is equal or newer, it is returned as-is.
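
The load path above can be sketched as a small decision function. The names and types here are illustrative assumptions, not the actual provider-registry.ts API:

```typescript
// Sketch of steps 1-4: decide whether to seed, merge, or return as-is.
type Registry = { version: number; providers: Record<string, unknown> };

function resolveRegistry(
  file: Registry | null, // contents of ~/.argentos/provider-registry.json, if present
  seed: Registry,
  merge: (file: Registry, seed: Registry) => Registry,
): Registry {
  if (file === null) return seed; // first run: seed is written out
  if (file.version < seed.version) return merge(file, seed); // stale: merge
  return file; // equal or newer: returned as-is
}
```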

Seed Providers

The current seed (version 7) includes these providers:
| Provider | API Format | Auth | Notable Models |
| --- | --- | --- | --- |
| MiniMax | Anthropic Messages | API Key | M2.5, M2.1, VL-01, M2-her |
| MiniMax Portal | Anthropic Messages | OAuth | M2.1 |
| Xiaomi | Anthropic Messages | API Key | MiMo V2 Flash |
| Moonshot | OpenAI Completions | API Key | Kimi K2.5 |
| Qwen Portal | OpenAI Completions | OAuth | Qwen Coder, Qwen Vision |
| Inception | OpenAI Completions | API Key | Mercury 2 |
| Ollama | OpenAI Completions | API Key | Dynamic (local models) |
| LM Studio | OpenAI Completions | None | Dynamic (local models) |
| Groq | OpenAI Completions | API Key | Llama 3.3, Qwen3 32B, GPT OSS |
| Synthetic (HF) | Anthropic Messages | API Key | DeepSeek, GLM-4.7, Kimi K2, Qwen3 |
| Venice | OpenAI Completions | API Key | 20+ models including proxied Claude, GPT, Gemini |

API Formats

Providers use one of two API formats:
  • OpenAI Completions API format. Used by Moonshot, Qwen Portal, Inception, Ollama, LM Studio, Groq, and Venice.
  • Anthropic Messages API format. Used by MiniMax, MiniMax Portal, Synthetic, and Xiaomi.
ArgentOS normalizes between these formats internally, so the model router can seamlessly fall back across providers regardless of their native API format.
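
A minimal sketch of one direction of that normalization: folding OpenAI-style system messages into the Anthropic Messages shape (top-level system string, user/assistant turns only). The type and function names here are illustrative assumptions, not the actual ArgentOS internals:

```typescript
type OpenAIMessage = { role: "system" | "user" | "assistant"; content: string };
type AnthropicRequest = {
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
};

function toAnthropicMessages(input: OpenAIMessage[]): AnthropicRequest {
  // Anthropic takes the system prompt as a top-level field, not a message.
  const system = input
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const messages = input
    .filter((m) => m.role !== "system")
    .map((m) => ({ role: m.role as "user" | "assistant", content: m.content }));
  return system ? { system, messages } : { messages };
}
```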

Dynamic Providers

Some providers support dynamic model discovery:
const ollama: ProviderRegistryEntry = {
  name: "Ollama",
  baseUrl: "http://127.0.0.1:11434/v1",
  api: "openai-completions",
  dynamic: true,
  discoveryUrl: "http://127.0.0.1:11434/api/tags",
  models: []  // Populated at runtime from discovery
};
When dynamic: true, the system queries the discoveryUrl to populate the model list. This is how local providers like Ollama and LM Studio expose whatever models you have downloaded.
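
A sketch of that discovery step for Ollama. The /api/tags response shape ({ models: [{ name: ... }] }) is Ollama's, but the mapping into registry model entries, and the zero-cost and context-window defaults, are assumptions:

```typescript
type ProviderModelEntry = {
  id: string;
  name: string;
  reasoning: boolean;
  input: ("text" | "image")[];
  cost: { input: number; output: number; cacheRead: number; cacheWrite: number };
  contextWindow: number;
  maxTokens: number;
};

// Map an Ollama /api/tags payload into registry model entries.
function parseOllamaTags(body: { models: { name: string }[] }): ProviderModelEntry[] {
  return body.models.map((m) => ({
    id: m.name,
    name: m.name,
    reasoning: false,
    input: ["text"],
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }, // local models are free
    contextWindow: 32768, // placeholder; real limits vary per model
    maxTokens: 4096,
  }));
}

// At runtime the registry would do roughly:
//   const res = await fetch(entry.discoveryUrl);
//   entry.models = parseOllamaTags(await res.json());
```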

Registry File Structure

The registry file at ~/.argentos/provider-registry.json:
{
  "version": 7,
  "providers": {
    "minimax": {
      "name": "MiniMax",
      "baseUrl": "https://api.minimax.io/anthropic",
      "api": "anthropic-messages",
      "authType": "api_key",
      "envKeyVar": "MINIMAX_API_KEY",
      "models": [
        {
          "id": "MiniMax-M2.5",
          "name": "MiniMax M2.5",
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 15,
            "output": 60,
            "cacheRead": 2,
            "cacheWrite": 10
          },
          "contextWindow": 200000,
          "maxTokens": 8192
        }
      ]
    },
    "ollama": {
      "name": "Ollama",
      "baseUrl": "http://127.0.0.1:11434/v1",
      "api": "openai-completions",
      "authType": "api_key",
      "envKeyVar": "OLLAMA_API_KEY",
      "dynamic": true,
      "discoveryUrl": "http://127.0.0.1:11434/api/tags",
      "models": []
    }
  }
}

Provider Entry Schema

type ProviderRegistryEntry = {
  name: string;              // Display name
  baseUrl: string;           // API endpoint
  api: "openai-completions" | "anthropic-messages";
  authType: "api_key" | "oauth" | "none";
  envKeyVar?: string;        // Environment variable for API key
  oauthPlaceholder?: string; // OAuth profile name
  dynamic?: boolean;         // Enable runtime model discovery
  discoveryUrl?: string;     // URL for model list endpoint
  models: ProviderModelEntry[];
};

Model Entry Schema

type ProviderModelEntry = {
  id: string;                // Model identifier (sent to API)
  name: string;              // Display name
  reasoning: boolean;        // Supports extended thinking
  input: ("text" | "image")[]; // Input modalities
  cost: {
    input: number;           // Cost per 1M input tokens (USD)
    output: number;          // Cost per 1M output tokens (USD)
    cacheRead: number;       // Cost per 1M cached read tokens
    cacheWrite: number;      // Cost per 1M cache write tokens
  };
  contextWindow: number;     // Max context length in tokens
  maxTokens: number;         // Max output tokens
};
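
Since every cost field is USD per 1M tokens, estimating a request's cost is a simple weighted sum. This helper is an illustrative sketch, not part of the registry API:

```typescript
type ModelCost = { input: number; output: number; cacheRead: number; cacheWrite: number };

// Estimate the USD cost of one request from its token usage.
function estimateCostUSD(
  cost: ModelCost,
  usage: { input: number; output: number; cacheRead?: number; cacheWrite?: number },
): number {
  const rate = (perMillion: number, tokens: number) => (perMillion * tokens) / 1_000_000;
  return (
    rate(cost.input, usage.input) +
    rate(cost.output, usage.output) +
    rate(cost.cacheRead, usage.cacheRead ?? 0) +
    rate(cost.cacheWrite, usage.cacheWrite ?? 0)
  );
}
```

With the MiniMax M2.5 rates shown earlier (15 input, 60 output), a request using 10,000 input and 1,000 output tokens comes to 0.15 + 0.06 = 0.21 USD.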

Registry Operations

Loading

import { loadProviderRegistry } from "./provider-registry.js";

const registry = loadProviderRegistry();
// Returns merged seed + user customizations

Saving

import { saveProviderRegistry } from "./provider-registry.js";

// File is written with mode 0o600 (owner read/write only)
saveProviderRegistry(registry);

Getting a Single Provider

import { getRegistryProvider } from "./provider-registry.js";

const ollama = getRegistryProvider("ollama");

Resetting to Seed

import { resetProviderToSeed, resetRegistryToSeed } from "./provider-registry.js";

// Reset one provider
resetProviderToSeed("minimax");

// Reset entire registry
resetRegistryToSeed();

Merge Strategy

When the seed version is bumped, the merge process:
  1. Start with seed as base. The full seed becomes the starting point.
  2. Merge existing providers. For each provider in the user’s existing file: if the provider also exists in the seed, merge models (user models take priority, new seed models are appended); if the provider is user-added (not in seed), preserve it as-is.
  3. Write merged result. The merged result is written with the new seed version.
This means:
  • New providers from seed updates are automatically added
  • New models for existing providers are added alongside user-customized models
  • User-added providers are never removed
  • User model customizations (cost overrides, context window adjustments) are preserved
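
The three steps above can be sketched as a pure merge function. All names here are illustrative; the real implementation lives in provider-registry.ts:

```typescript
type SketchModel = { id: string; [k: string]: unknown };
type SketchProvider = { models: SketchModel[]; [k: string]: unknown };
type SketchRegistry = { version: number; providers: Record<string, SketchProvider> };

function mergeRegistry(user: SketchRegistry, seed: SketchRegistry): SketchRegistry {
  // Step 1: start with the full seed as the base.
  const providers: Record<string, SketchProvider> = { ...seed.providers };

  // Step 2: fold the user's existing providers back in.
  for (const [key, userProv] of Object.entries(user.providers)) {
    const seedProv = seed.providers[key];
    if (!seedProv) {
      providers[key] = userProv; // user-added provider: preserved as-is
      continue;
    }
    const userIds = new Set(userProv.models.map((m) => m.id));
    providers[key] = {
      ...seedProv,
      ...userProv, // user fields take priority
      // user models (with customizations) first, then new seed models appended
      models: [...userProv.models, ...seedProv.models.filter((m) => !userIds.has(m.id))],
    };
  }

  // Step 3: the merged result carries the new seed version.
  return { version: seed.version, providers };
}
```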

Adding Custom Providers

Edit ~/.argentos/provider-registry.json to add a new provider:
{
  "my-local-llm": {
    "name": "My Local LLM",
    "baseUrl": "http://localhost:8080/v1",
    "api": "openai-completions",
    "authType": "none",
    "models": [
      {
        "id": "my-model",
        "name": "My Custom Model",
        "reasoning": false,
        "input": ["text"],
        "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
        "contextWindow": 32768,
        "maxTokens": 4096
      }
    ]
  }
}
Custom providers survive seed updates because the merge strategy preserves user-added entries.

Integration with Model Router

The provider registry feeds the model router’s provider resolution. When the router needs to route a request to a specific tier (LOCAL, FAST, BALANCED, POWERFUL), it consults the registry to find available providers and their models. The registry also provides cost data that the router uses for budget-aware routing decisions.
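
A hypothetical sketch of that budget-aware step: given the registry candidates for a tier, pick the one with the lowest blended cost. The candidate shape, the blending, and all names here are assumptions about the router, not its documented API:

```typescript
type Candidate = {
  provider: string;
  modelId: string;
  cost: { input: number; output: number }; // USD per 1M tokens, from the registry
};

function cheapestCandidate(candidates: Candidate[]): Candidate | undefined {
  // Blend input and output rates; a real router could weight by expected usage.
  const blended = (c: Candidate) => c.cost.input + c.cost.output;
  return candidates.reduce<Candidate | undefined>(
    (best, c) => (best === undefined || blended(c) < blended(best) ? c : best),
    undefined,
  );
}
```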

Key Files

| File | Description |
| --- | --- |
| src/agents/provider-registry.ts | Registry manager (149 LOC) |
| src/agents/provider-registry-seed.ts | Seed data with 11+ providers (752 LOC) |
| src/config/types.models.ts | Type definitions for registry entries |
| ~/.argentos/provider-registry.json | Persistent registry file |