AI-Lib
Specification v1.5 · 30+ Providers

The Specification That Drives Everything.

AI-Protocol separates "what to do" from "how to do it." Provider manifests declare endpoints, auth, parameter mappings, streaming decoders, and error handling — all in YAML, all validated by JSON Schema.

What's Inside

spec.yaml

Core Specification

Defines standard parameters (temperature, max_tokens), streaming events (PartialContentDelta, ToolCallStarted), error classes (13 types), and retry policies.
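For a sense of its shape, a spec.yaml fragment might read like the sketch below (field names here are illustrative, not normative — the spec file itself is authoritative):

standard_parameters:
  temperature:
    type: number
    range: [0.0, 2.0]
  max_tokens:
    type: integer
streaming_events:
  - PartialContentDelta
  - ToolCallStarted
error_classes:            # 13 classes in the spec; two shown
  - rate_limited
  - authentication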

providers/

30+ Provider Manifests

Each YAML file declares a provider's endpoint, auth, parameter mappings, SSE decoder config, error classification, rate limit headers, and capabilities.

models/

Model Registry

Model instances with provider references, context windows, capability flags, and per-token pricing. GPT, Claude, Gemini, DeepSeek, Qwen, and more.
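For illustration, a registry entry could look like this (keys and pricing figures are assumed for the example, not quoted from the registry):

id: claude-sonnet
provider: anthropic
context_window: 200000
capabilities:
  tools: true
  vision: true
pricing:                  # USD per token, illustrative values
  input_per_token: 0.000003
  output_per_token: 0.000015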

schemas/

JSON Schema Validation

JSON Schema 2020-12 definitions validate every manifest. CI pipelines ensure configuration correctness. Zero runtime surprises.
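As a rough sketch (property names inferred from the manifest example below; the real definitions live in schemas/), a 2020-12 schema fragment enforcing the core manifest keys might look like:

$schema: "https://json-schema.org/draft/2020-12/schema"
type: object
required: [id, protocol_version, endpoint, auth]
properties:
  id:
    type: string
  protocol_version:
    type: string
  endpoint:
    type: object
    required: [base_url]
    properties:
      base_url:
        type: string
        format: uri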

A Provider Manifest

Each provider is described by a YAML manifest. It declares everything a runtime needs to communicate with the provider — endpoint, authentication, parameter mapping, streaming decoder, error handling, and capabilities.

Runtimes read these manifests and "compile" user requests into provider-specific HTTP calls. No if provider == "openai" branches anywhere.

  • Endpoint & Auth — Base URL, protocol, bearer tokens, API key headers
  • Parameter Mapping — Standard names to provider-specific JSON fields
  • Streaming Decoder — SSE/NDJSON format, JSONPath event extraction rules
  • Error Classification — HTTP status codes to 13 standard error types
id: anthropic
protocol_version: "1.5"
endpoint:
  base_url: "https://api.anthropic.com/v1"
  chat_path: "/messages"
auth:
  type: api_key
  header: "x-api-key"            # Anthropic expects the key in this header rather than a bearer token
  token_env: "ANTHROPIC_API_KEY"
parameter_mappings:
  temperature: "temperature"
  max_tokens: "max_tokens"
  stream: "stream"
streaming:
  decoder:
    format: "anthropic_sse"
  event_map:
    - match: "$.type == 'content_block_delta'"
      emit: "PartialContentDelta"
error_classification:
  by_http_status:
    "429": "rate_limited"
    "401": "authentication"
capabilities:
  streaming: true
  tools: true
  vision: true
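To make the "compile" step concrete, here is a hedged sketch of how a runtime might apply the manifest above to a standard request (the request and payload shapes are assumed for illustration, not taken from the spec):

# Standard request (vendor-neutral)
model: claude-sonnet
temperature: 0.7
max_tokens: 1024
stream: true

# Compiled provider call (per the manifest above)
# POST https://api.anthropic.com/v1/messages
# x-api-key: $ANTHROPIC_API_KEY
temperature: 0.7        # mapped via parameter_mappings
max_tokens: 1024
stream: true

On the way back, each SSE frame whose type is content_block_delta matches the event_map rule and is emitted as a standard PartialContentDelta event, so application code never touches provider-specific wire formats.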

Where Protocol Fits

AI-Protocol is the foundation layer. Runtimes consume it. Applications consume runtimes.

[Diagram: AI-Lib Ecosystem Architecture — three layers, top to bottom]

  • APPLICATION — Web apps / API services, AI agents (multi-turn / tool calling), CLI tools, batch / data pipelines. Your application code.
  • RUNTIME — ai-lib-rust v0.6.6 (AiClient, Pipeline, Transport, Resilience, Embeddings, Cache / Batch; Crates.io · tokio + reqwest · <1ms overhead) and ai-lib-python v0.5.0 (AiClient, Pipeline, Transport, Resilience, Telemetry, Routing; PyPI · httpx + Pydantic v2 · async/await).
  • PROTOCOL — AI-Protocol v1.5: spec.yaml (core specification), providers/*.yaml (30+ provider manifests), models/*.yaml (model registry), schemas/ (JSON Schema).

YAML definitions → JSON compilation → runtime consumption · Vendor neutral

Supported Providers

Each provider has a complete YAML manifest with endpoint, auth, parameter mappings, streaming decoder, error handling, and capability flags.

OpenAI
Anthropic
Google Gemini
Groq
Mistral
DeepSeek
Qwen
Cohere
Azure OpenAI
Together AI
Perplexity
NVIDIA
Fireworks AI
Replicate
OpenRouter
DeepInfra
AI21 Labs
Cerebras
Lepton AI
Zhipu GLM
Doubao
Baidu ERNIE
Tencent Hunyuan
iFlytek Spark
Moonshot
MiniMax
Baichuan
Yi / 01.AI
SiliconFlow
SenseNova

Explore the Protocol

Read the specification, browse provider manifests, or contribute a new provider.