# AiClient API (Python)

## Creating a Client

### From Model Identifier

```python
from ai_lib_python import AiClient

client = await AiClient.create("anthropic/claude-3-5-sonnet")
```
### With Builder

```python
client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .protocol_dir("./ai-protocol") \
    .timeout(60) \
    .build()
```
### With Resilience

```python
from ai_lib_python.resilience import ResilientConfig

config = ResilientConfig(
    max_retries=3,
    rate_limit_rps=10,
    circuit_breaker_threshold=5,
    max_inflight=50,
)

client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .resilience(config) \
    .build()
```
## ChatRequestBuilder

A fluent API for building requests:

```python
response = await client.chat() \
    .system("You are a helpful assistant") \
    .user("Hello!") \
    .messages([Message.user("Follow-up")]) \
    .temperature(0.7) \
    .max_tokens(1000) \
    .top_p(0.9) \
    .tools([tool_definition]) \
    .execute()
```
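The `tool_definition` passed to `.tools()` is not defined in this reference. As a rough sketch, here is one plausible shape, assuming the library accepts the JSON-schema function format common across providers; the exact keys are an assumption, not part of this API:

```python
# Hypothetical tool definition. The field names below assume a
# JSON-schema "function" format and may differ in ai_lib_python.
tool_definition = {
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
```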
## Response Types

### ChatResponse
```python
class ChatResponse:
    content: str                # Response text
    tool_calls: list[ToolCall]  # Function calls
    finish_reason: str          # Completion reason
    usage: Usage                # Token counts
```
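A common pattern is to check `tool_calls` before falling back to `content`. A minimal sketch, using stand-in dataclasses that mirror the documented fields so it runs without a live client (the real classes live in `ai_lib_python`):

```python
from dataclasses import dataclass, field

# Stand-in types mirroring the documented ChatResponse fields;
# used here only so the example runs without a live client.
@dataclass
class ToolCall:
    name: str
    arguments: str

@dataclass
class ChatResponse:
    content: str
    tool_calls: list = field(default_factory=list)
    finish_reason: str = "stop"

def handle(response: ChatResponse):
    # Tool calls take priority: a response carrying them usually has no text.
    if response.tool_calls:
        return [(c.name, c.arguments) for c in response.tool_calls]
    return response.content

print(handle(ChatResponse(content="Hi there")))
# → Hi there
```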
### StreamingEvent

```python
class StreamingEvent:
    # Type checks
    is_content_delta: bool
    is_tool_call_started: bool
    is_partial_tool_call: bool
    is_stream_end: bool

    # Type-safe accessors
    as_content_delta: ContentDelta
    as_tool_call_started: ToolCallStarted
    as_partial_tool_call: PartialToolCall
    as_stream_end: StreamEnd
```
### CallStats

```python
class CallStats:
    total_tokens: int
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    model: str
    provider: str
```
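These fields lend themselves to one-line request logging. A sketch, again using a stand-in dataclass with the documented fields so it runs standalone:

```python
from dataclasses import dataclass

# Stand-in for the documented CallStats fields.
@dataclass
class CallStats:
    total_tokens: int
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    model: str
    provider: str

def format_stats(stats: CallStats) -> str:
    """Render a one-line log entry from the documented fields."""
    return (f"{stats.provider}/{stats.model}: "
            f"{stats.prompt_tokens}+{stats.completion_tokens}"
            f"={stats.total_tokens} tokens in {stats.latency_ms:.0f} ms")

print(format_stats(CallStats(120, 100, 20, 843.2, "gpt-4o", "openai")))
# → openai/gpt-4o: 100+20=120 tokens in 843 ms
```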
## Execution Modes

### Non-Streaming
```python
# Simple response
response = await client.chat().user("Hello").execute()

# With statistics
response, stats = await client.chat().user("Hello").execute_with_stats()
```
### Streaming

```python
async for event in client.chat().user("Hello").stream():
    if event.is_content_delta:
        print(event.as_content_delta.text, end="")
```
### Cancellable Stream

```python
from ai_lib_python import CancelToken

token = CancelToken()

async for event in client.chat().user("Long task...").stream(cancel_token=token):
    if event.is_content_delta:
        print(event.as_content_delta.text, end="")
    if should_cancel:
        token.cancel()
        break
```
## Error Handling

```python
from ai_lib_python.errors import (
    AiLibError,
    ProtocolError,
    TransportError,
    RemoteError,
)

try:
    response = await client.chat().user("Hello").execute()
except RemoteError as e:
    print(f"Provider error: {e.error_type}")  # Standard error class
    print(f"HTTP status: {e.status_code}")
except TransportError as e:
    print(f"Network error: {e}")
except ProtocolError as e:
    print(f"Protocol error: {e}")
except AiLibError as e:
    print(f"Other error: {e}")
```
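One way to use this hierarchy is to classify failures as transient or permanent before deciding whether to retry (the built-in `ResilientConfig` already retries for you; this sketch only illustrates the distinction). The stub exception classes below mirror the documented hierarchy so the example runs without the library installed, and the retryable-status heuristic is an assumption, not library behavior:

```python
# Stand-in exception classes mirroring the documented hierarchy.
class AiLibError(Exception): ...
class ProtocolError(AiLibError): ...
class TransportError(AiLibError): ...

class RemoteError(AiLibError):
    def __init__(self, error_type: str, status_code: int):
        super().__init__(error_type)
        self.error_type = error_type
        self.status_code = status_code

def is_retryable(exc: AiLibError) -> bool:
    """Heuristic: network failures and provider 429/5xx are transient;
    protocol errors indicate a client-side bug and should not be retried."""
    if isinstance(exc, TransportError):
        return True
    if isinstance(exc, RemoteError):
        return exc.status_code == 429 or exc.status_code >= 500
    return False

print(is_retryable(TransportError("timeout")))         # True
print(is_retryable(RemoteError("rate_limited", 429)))  # True
print(is_retryable(ProtocolError("bad frame")))        # False
```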
## Next Steps

- Streaming Pipeline — Pipeline internals
- Resilience — Reliability patterns
- Advanced Features — Telemetry, routing, plugins