Observability

Both runtimes provide observability features for production deployments.

ai-lib-rust uses the tracing ecosystem:

// Install a global formatting subscriber (requires the tracing-subscriber crate)
tracing_subscriber::fmt::init();
// Or pick a verbosity explicitly, e.g. to see pipeline stages:
// tracing_subscriber::fmt().with_max_level(tracing::Level::DEBUG).init();
// All AI-Lib operations emit structured log events
let client = AiClient::from_model("openai/gpt-4o").await?;

Log levels:

  • INFO — Request/response summaries
  • DEBUG — Protocol loading, pipeline stages
  • TRACE — Individual frames, JSONPath matches

Every request returns usage statistics:

let (response, stats) = client.chat()
    .user("Hello")
    .execute_with_stats()
    .await?;
println!("Model: {}", stats.model);
println!("Provider: {}", stats.provider);
println!("Prompt tokens: {}", stats.prompt_tokens);
println!("Completion tokens: {}", stats.completion_tokens);
println!("Total tokens: {}", stats.total_tokens);
println!("Latency: {}ms", stats.latency_ms);
Python: Metrics (Prometheus)

ai-lib-python records per-request metrics through a MetricsCollector, which can export them in Prometheus text format:

from ai_lib_python.telemetry import MetricsCollector
metrics = MetricsCollector()
client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .metrics(metrics) \
    .build()
# After some requests...
prometheus_text = metrics.export_prometheus()

Tracked metrics:

  • ai_lib_requests_total — Request count by model/provider
  • ai_lib_request_duration_seconds — Latency histogram
  • ai_lib_tokens_total — Token usage by type
  • ai_lib_errors_total — Error count by type
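
To make these metrics scrapeable, serve the exporter output over HTTP. A minimal sketch reusing the metrics collector from above and only the Python standard library; the port and /metrics path are conventional choices, not part of ai-lib-python:

from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the Prometheus text exposition produced by ai-lib's collector
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = metrics.export_prometheus().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 9100), MetricsHandler).serve_forever()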

Python: Distributed Tracing (OpenTelemetry)

Configure a Tracer that exports spans to a collector endpoint (here Jaeger's OTLP port):

from ai_lib_python.telemetry import Tracer
tracer = Tracer(
    service_name="my-app",
    endpoint="http://jaeger:4317",
)
client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .tracer(tracer) \
    .build()

Traces include spans for:

  • Protocol loading
  • Request compilation
  • HTTP transport
  • Pipeline processing
  • Event mapping
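
ai-lib spans nest under any active OpenTelemetry context. A minimal sketch, assuming the Tracer above registers a global OpenTelemetry tracer provider (the span name is illustrative):

from opentelemetry import trace

otel_tracer = trace.get_tracer("my-app")
with otel_tracer.start_as_current_span("handle-user-turn"):
    ...  # ai-lib requests issued here appear as child spans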

Python: Health Checks

Probe runtime health with the HealthChecker:

from ai_lib_python.telemetry import HealthChecker
health = HealthChecker()
status = await health.check()
print(f"Healthy: {status.is_healthy}")
print(f"Details: {status.details}")

Collect feedback on AI responses:

from ai_lib_python.telemetry import FeedbackCollector
feedback = FeedbackCollector()
# After getting a response
feedback.record(
    request_id=stats.request_id,
    rating=5,
    comment="Helpful response",
)

Monitor circuit breaker and rate limiter state:

// Rust
let state = client.circuit_state(); // Closed, Open, HalfOpen
let inflight = client.current_inflight();

# Python
signals = client.signals_snapshot()
print(f"Circuit: {signals.circuit_state}")
print(f"Inflight: {signals.current_inflight}")