
Function Calling

Function calling (tool use) lets AI models invoke external functions. AI-Lib provides a unified interface for tool calling across all providers that support it.

```rust
use ai_lib::ToolDefinition;
use serde_json::json;

let get_weather = ToolDefinition {
    name: "get_weather".into(),
    description: Some("Get current weather for a city".into()),
    parameters: json!({
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["city"]
    }),
};
```
```python
get_weather = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
            },
        },
        "required": ["city"],
    },
}
```
```rust
let response = client.chat()
    .user("What's the weather in Tokyo?")
    .tools(vec![get_weather])
    .execute()
    .await?;

for call in &response.tool_calls {
    println!("Function: {}", call.name);
    println!("Arguments: {}", call.arguments);
    // Execute the function and send results back
}
```
```python
response = await client.chat() \
    .user("What's the weather in Tokyo?") \
    .tools([get_weather]) \
    .execute()

for call in response.tool_calls:
    print(f"Function: {call.name}")
    print(f"Arguments: {call.arguments}")
```
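The comment in the example above says to execute the function and send results back, but leaves that step out. Here is a minimal sketch of the dispatch side in plain Python, assuming the model delivers arguments as a JSON string; the `HANDLERS` registry and `dispatch` helper are illustrative, not part of AI-Lib:

```python
import json

# Local implementation of the tool the model may call.
def get_weather(city: str, unit: str = "celsius") -> dict:
    # A real app would call a weather API; this is a stub.
    return {"city": city, "temp": 21, "unit": unit}

# Registry mapping tool names (as declared to the model) to handlers.
HANDLERS = {"get_weather": get_weather}

def dispatch(name: str, arguments: str) -> str:
    """Parse the model's JSON arguments, run the matching handler,
    and serialize the result to send back in a follow-up message."""
    args = json.loads(arguments)
    result = HANDLERS[name](**args)
    return json.dumps(result)

print(dispatch("get_weather", '{"city": "Tokyo"}'))
```

The serialized result would then be appended to the conversation as a tool-result message so the model can produce its final answer.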

Tool calls stream as partial events that the pipeline’s Accumulator assembles:

```rust
let mut stream = client.chat()
    .user("What's the weather?")
    .tools(vec![get_weather])
    .stream()
    .execute_stream()
    .await?;

while let Some(event) = stream.next().await {
    match event? {
        StreamingEvent::ToolCallStarted { name, id, .. } => {
            println!("Starting tool: {name} (id: {id})");
        }
        StreamingEvent::PartialToolCall { arguments, .. } => {
            print!("{arguments}"); // Partial JSON arguments
        }
        StreamingEvent::ToolCallEnded { id, .. } => {
            println!("\nTool call {id} complete");
        }
        StreamingEvent::ContentDelta { text, .. } => {
            print!("{text}");
        }
        _ => {}
    }
}
```
```python
async for event in client.chat() \
        .user("What's the weather?") \
        .tools([get_weather]) \
        .stream():
    if event.is_tool_call_started:
        call = event.as_tool_call_started
        print(f"Starting: {call.name}")
    elif event.is_partial_tool_call:
        print(event.as_partial_tool_call.arguments, end="")
    elif event.is_content_delta:
        print(event.as_content_delta.text, end="")
```
How it works, end to end:

  1. You define tools and pass them in the request
  2. The protocol manifest maps tools to the provider-specific format
  3. The model decides to call a tool (or respond with text)
  4. For streaming, the pipeline’s Accumulator assembles partial tool call chunks
  5. You receive unified `ToolCallStarted`, `PartialToolCall`, and `ToolCallEnded` events
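Step 4 can be sketched in plain Python: an accumulator buffers argument fragments per tool-call id as `PartialToolCall` events arrive, then parses the complete JSON when the call ends. The class and method names below are illustrative, not AI-Lib's internal API:

```python
import json

class ToolCallAccumulator:
    """Minimal sketch of assembling streamed partial tool calls."""

    def __init__(self):
        self.buffers = {}   # call id -> list of raw argument fragments
        self.complete = {}  # call id -> fully parsed arguments

    def on_started(self, call_id: str, name: str) -> None:
        self.buffers[call_id] = []

    def on_partial(self, call_id: str, fragment: str) -> None:
        # Fragments are not valid JSON on their own; just buffer them.
        self.buffers[call_id].append(fragment)

    def on_ended(self, call_id: str) -> None:
        # Only now is the concatenated buffer parseable as JSON.
        raw = "".join(self.buffers.pop(call_id))
        self.complete[call_id] = json.loads(raw)

acc = ToolCallAccumulator()
acc.on_started("call_1", "get_weather")
for chunk in ['{"ci', 'ty": "To', 'kyo"}']:
    acc.on_partial("call_1", chunk)
acc.on_ended("call_1")
print(acc.complete["call_1"])  # {'city': 'Tokyo'}
```

This is why the streaming examples print `PartialToolCall` arguments incrementally but only act on them after `ToolCallEnded`: intermediate fragments are not valid JSON.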

Check the provider’s capabilities before using tools:

| Provider  | Tool Calling |
| --------- | ------------ |
| OpenAI    | Supported    |
| Anthropic | Supported    |
| Gemini    | Supported    |
| DeepSeek  | Supported    |
| Groq      | Supported    |
| Mistral   | Supported    |
| Qwen      | Supported    |

The manifest’s `capabilities.tools: true` flag indicates support.
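As a sketch of that check in plain Python, assuming the manifest is exposed as a simple mapping (the exact field layout here is an assumption based on the `capabilities.tools` flag, not AI-Lib's actual manifest schema):

```python
# Hypothetical manifest shape for a provider.
manifest = {
    "provider": "openai",
    "capabilities": {"tools": True, "streaming": True},
}

def supports_tools(manifest: dict) -> bool:
    """Return True only when the manifest explicitly enables tools."""
    return bool(manifest.get("capabilities", {}).get("tools", False))

print(supports_tools(manifest))  # True
```

Guarding requests this way avoids sending a `tools` payload to a provider that would reject or silently ignore it.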