AI-Protocol Overview
AI-Protocol is a provider-agnostic specification that standardizes interactions with AI models. It separates what runtimes need to know about a provider (configuration) from how they execute requests (code).
Core Philosophy
All logic is operators, all configuration is protocol.
Every piece of provider-specific behavior — endpoints, authentication, parameter names, streaming formats, error codes — is declared in YAML configuration files. Runtime implementations contain zero hardcoded provider logic.
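To make the split concrete, here is a minimal sketch (not the actual runtime API; `build_request` and the manifest dict shape are illustrative, mirroring the manifest sections described below) of a runtime that builds a request purely from configuration:

```python
# Sketch: a generic request builder with zero hardcoded provider logic.
# Everything provider-specific comes from a manifest dict parsed from YAML.
def build_request(manifest: dict, params: dict, token: str) -> dict:
    """Describe an HTTP request using only the manifest's configuration."""
    ep = manifest["endpoint"]
    mappings = manifest["parameter_mappings"]
    # Rename standard parameters to the provider's own names.
    body = {mappings[k]: v for k, v in params.items() if k in mappings}
    headers = {"Authorization": f"Bearer {token}",
               **manifest["auth"].get("headers", {})}
    return {"url": ep["base_url"] + ep["chat_path"],
            "headers": headers, "body": body}

# A trimmed-down manifest fragment for illustration:
manifest = {
    "endpoint": {"base_url": "https://api.anthropic.com/v1",
                 "chat_path": "/messages"},
    "auth": {"type": "bearer", "headers": {"anthropic-version": "2023-06-01"}},
    "parameter_mappings": {"temperature": "temperature",
                           "max_tokens": "max_tokens"},
}
req = build_request(manifest, {"temperature": 0.2, "max_tokens": 1024}, "sk-...")
print(req["url"])  # https://api.anthropic.com/v1/messages
```

Swapping providers means swapping the manifest, not the code.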
What’s in the Repository
```
ai-protocol/
├── v1/
│   ├── spec.yaml          # Core specification (v1.1)
│   ├── providers/         # 30+ provider manifests
│   │   ├── openai.yaml
│   │   ├── anthropic.yaml
│   │   ├── gemini.yaml
│   │   ├── deepseek.yaml
│   │   └── ...
│   └── models/            # Model instance registry
│       ├── gpt.yaml
│       ├── claude.yaml
│       └── ...
├── schemas/               # JSON Schema validation
│   ├── v1.json
│   └── spec.json
├── dist/                  # Pre-compiled JSON (generated)
├── scripts/               # Build & validation tools
└── examples/              # Usage examples
```

Provider Manifests
Each provider has a YAML manifest declaring everything a runtime needs:
| Section | Purpose |
|---|---|
| `endpoint` | Base URL, chat path, protocol |
| `auth` | Authentication type, token env var, headers |
| `parameter_mappings` | Standard → provider-specific parameter names |
| `streaming` | Decoder format (SSE/NDJSON), event mapping rules (JSONPath) |
| `error_classification` | HTTP status → standard error types |
| `retry_policy` | Strategy, delays, retry conditions |
| `rate_limit_headers` | Header names for rate limit information |
| `capabilities` | Feature flags (streaming, tools, vision, reasoning) |
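These sections drive concrete runtime behavior. As one sketch, error classification can be a straight lookup into the `by_http_status` map (the statuses here match the Anthropic manifest; the `provider_error` fallback is an assumption, not part of the spec):

```python
# Sketch: classifying an HTTP status via a manifest's
# error_classification.by_http_status map.
BY_HTTP_STATUS = {"429": "rate_limited",
                  "401": "authentication",
                  "529": "overloaded"}

def classify(status: int) -> str:
    # Fall back to a generic type for undeclared statuses (assumed behavior).
    return BY_HTTP_STATUS.get(str(status), "provider_error")

print(classify(429))  # rate_limited
print(classify(500))  # provider_error
```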
Example: Anthropic Provider
```yaml
id: anthropic
protocol_version: "1.5"

endpoint:
  base_url: "https://api.anthropic.com/v1"
  chat_path: "/messages"

auth:
  type: bearer
  token_env: "ANTHROPIC_API_KEY"
  headers:
    anthropic-version: "2023-06-01"

parameter_mappings:
  temperature: "temperature"
  max_tokens: "max_tokens"
  stream: "stream"
  tools: "tools"

streaming:
  decoder:
    format: "anthropic_sse"
  event_map:
    - match: "$.type == 'content_block_delta'"
      emit: "PartialContentDelta"
      extract:
        content: "$.delta.text"
    - match: "$.type == 'message_stop'"
      emit: "StreamEnd"

error_classification:
  by_http_status:
    "429": "rate_limited"
    "401": "authentication"
    "529": "overloaded"

capabilities:
  streaming: true
  tools: true
  vision: true
  reasoning: true
```

Model Registry
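To see how the `event_map` rules translate a provider's stream events into standard events, here is a hedged sketch: real runtimes use a JSONPath engine, while this stand-in only handles the `$.type == '...'` match pattern and dotted extraction paths from the Anthropic manifest.

```python
import json
import re

# The event_map rules from the Anthropic manifest, as plain dicts.
EVENT_MAP = [
    {"match": "$.type == 'content_block_delta'",
     "emit": "PartialContentDelta",
     "extract": {"content": "$.delta.text"}},
    {"match": "$.type == 'message_stop'", "emit": "StreamEnd"},
]

def _get(path: str, obj: dict):
    # Resolve a simple dotted JSONPath like "$.delta.text".
    for key in path.lstrip("$.").split("."):
        obj = obj[key]
    return obj

def map_event(raw: str):
    """Map one raw provider event to (standard_event, extracted_fields)."""
    event = json.loads(raw)
    for rule in EVENT_MAP:
        m = re.fullmatch(r"\$\.type == '(.+)'", rule["match"])
        if m and event.get("type") == m.group(1):
            extracted = {k: _get(p, event)
                        for k, p in rule.get("extract", {}).items()}
            return rule["emit"], extracted
    return None, {}  # unmatched events are ignored in this sketch

name, data = map_event('{"type": "content_block_delta", "delta": {"text": "Hi"}}')
print(name, data)  # PartialContentDelta {'content': 'Hi'}
```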
Models are registered with provider references, capabilities, and pricing:
```yaml
models:
  claude-3-5-sonnet:
    provider: anthropic
    model_id: "claude-3-5-sonnet-20241022"
    context_window: 200000
    capabilities: [chat, vision, tools, streaming, reasoning]
    pricing:
      input_per_token: 0.000003
      output_per_token: 0.000015
```

Validation
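The per-token pricing makes cost estimation a one-liner. A sketch (`estimate_cost` is illustrative, not part of the spec), using the claude-3-5-sonnet rates above:

```python
# Sketch: estimating request cost from the registry's per-token pricing.
PRICING = {"input_per_token": 0.000003, "output_per_token": 0.000015}

def estimate_cost(input_tokens: int, output_tokens: int,
                  pricing: dict = PRICING) -> float:
    """Cost in USD for a request with the given token counts."""
    return (input_tokens * pricing["input_per_token"]
            + output_tokens * pricing["output_per_token"])

# 10,000 prompt tokens + 2,000 completion tokens:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0600
```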
All manifests are validated against JSON Schema (2020-12) using AJV. CI pipelines enforce correctness:
```sh
npm run validate   # Validate all configurations
npm run build      # Compile YAML → JSON
```

Versioning
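Outside the CI pipeline, a quick smoke check on a compiled manifest needs no tooling at all. A hedged stand-in (the real validation is AJV against the JSON Schemas in `schemas/`; this only checks the top-level sections a runtime depends on, and the required-key set is an assumption):

```python
# Sketch: minimal shape check on a manifest dict, standard library only.
# Not a replacement for `npm run validate`.
REQUIRED = {"id", "protocol_version", "endpoint", "auth", "parameter_mappings"}

def check_manifest(manifest: dict) -> list[str]:
    """Return the missing required sections (empty list means OK)."""
    return sorted(REQUIRED - manifest.keys())

ok = {"id": "anthropic", "protocol_version": "1.5",
      "endpoint": {}, "auth": {}, "parameter_mappings": {}}
print(check_manifest(ok))             # []
print(check_manifest({"id": "x"}))    # the four missing sections
```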
AI-Protocol uses layered versioning:
- Spec version (`v1/spec.yaml`) — Schema structure version (currently 1.1)
- Protocol version (in manifests) — Protocol features used (currently 1.5)
- Release version (`package.json`) — SemVer for the specification package
Next Steps
- Specification Details — Core spec deep dive
- Provider Manifests — How manifests work
- Model Registry — Model configuration
- Contributing Providers — Add a new provider