This document describes how tool/function calling is represented in OpenInference spans.
Tools available to the LLM are represented using the `llm.tools` prefix with flattened attributes:

```
llm.tools.<index>.tool.json_schema
```
The `json_schema` attribute contains the complete tool definition as a JSON string. For example:

```json
{
  "llm.tools.0.tool.json_schema": "{\"type\": \"function\", \"function\": {\"name\": \"get_weather\", \"description\": \"Get current weather for a location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"City and state\"}}, \"required\": [\"location\"]}}}"
}
```
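As a rough sketch of how these attributes might be set with the OpenTelemetry Python API (the span name, tracer setup, and tool payload here are illustrative, not prescribed by the spec):

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical tool definition in the OpenAI function-calling format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"}
            },
            "required": ["location"],
        },
    },
}

with tracer.start_as_current_span("llm") as span:
    # Each tool definition is serialized to a JSON string under its own index.
    for i, tool in enumerate([weather_tool]):
        span.set_attribute(f"llm.tools.{i}.tool.json_schema", json.dumps(tool))
```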
When an LLM generates tool calls, they are represented in the output messages under the following key pattern:

```
llm.output_messages.<messageIndex>.message.tool_calls.<toolCallIndex>.tool_call.<attribute>
```
Where:

- `<messageIndex>` is the zero-based index of the message
- `<toolCallIndex>` is the zero-based index of the tool call within the message
- `<attribute>` is the specific tool call attribute

The tool call attributes are:

- `tool_call.id`: Unique identifier for the tool call
- `tool_call.function.name`: Name of the function being called
- `tool_call.function.arguments`: JSON string containing the function arguments

For example:

```json
{
  "llm.output_messages.0.message.role": "assistant",
  "llm.output_messages.0.message.tool_calls.0.tool_call.id": "call_abc123",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.name": "get_weather",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.arguments": "{\"location\": \"San Francisco, CA\"}"
}
```
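A minimal sketch of how an instrumentor might flatten one tool call onto a span; the helper name and argument shapes are illustrative, not part of the spec:

```python
import json

from opentelemetry.trace import Span


def record_tool_call(
    span: Span,
    message_index: int,
    call_index: int,
    call_id: str,
    name: str,
    arguments: dict,
) -> None:
    """Flatten one tool call onto a span (illustrative helper)."""
    prefix = (
        f"llm.output_messages.{message_index}"
        f".message.tool_calls.{call_index}.tool_call"
    )
    span.set_attribute(f"{prefix}.id", call_id)
    span.set_attribute(f"{prefix}.function.name", name)
    # Arguments are recorded as a JSON string, never as a nested object.
    span.set_attribute(f"{prefix}.function.arguments", json.dumps(arguments))
```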
When an LLM makes multiple tool calls in a single response, each call receives its own `<toolCallIndex>`:

```json
{
  "llm.output_messages.0.message.role": "assistant",
  "llm.output_messages.0.message.tool_calls.0.tool_call.id": "call_001",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.name": "get_weather",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.arguments": "{\"location\": \"New York\"}",
  "llm.output_messages.0.message.tool_calls.1.tool_call.id": "call_002",
  "llm.output_messages.0.message.tool_calls.1.tool_call.function.name": "get_weather",
  "llm.output_messages.0.message.tool_calls.1.tool_call.function.arguments": "{\"location\": \"London\"}"
}
```
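Reusing the `record_tool_call` helper sketched above, emitting multiple calls is just a loop over the call index (the call list here is hypothetical):

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical tool calls parsed from one assistant message.
calls = [
    {"id": "call_001", "name": "get_weather", "arguments": {"location": "New York"}},
    {"id": "call_002", "name": "get_weather", "arguments": {"location": "London"}},
]

with tracer.start_as_current_span("llm") as span:
    span.set_attribute("llm.output_messages.0.message.role", "assistant")
    # Each tool call in the message gets the next zero-based index.
    for i, call in enumerate(calls):
        record_tool_call(span, 0, i, call["id"], call["name"], call["arguments"])
```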
Tool results are typically represented as input messages with role `"tool"`:

```json
{
  "llm.input_messages.3.message.role": "tool",
  "llm.input_messages.3.message.content": "{\"temperature\": 72, \"condition\": \"sunny\"}",
  "llm.input_messages.3.message.tool_call_id": "call_abc123"
}
```
The `message.tool_call_id` attribute links the result back to the `tool_call.id` of the original tool call.
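A corresponding sketch for recording a tool result; the helper name and result shape are illustrative:

```python
import json

from opentelemetry.trace import Span


def record_tool_result(
    span: Span, message_index: int, tool_call_id: str, result: dict
) -> None:
    """Record a tool result as an input message (illustrative helper)."""
    prefix = f"llm.input_messages.{message_index}.message"
    span.set_attribute(f"{prefix}.role", "tool")
    span.set_attribute(f"{prefix}.content", json.dumps(result))
    # tool_call_id ties this result back to the earlier tool_call.id.
    span.set_attribute(f"{prefix}.tool_call_id", tool_call_id)
```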
Putting it all together, a single weather lookup traces as follows. The user asks a question:

```json
{
  "llm.input_messages.0.message.role": "user",
  "llm.input_messages.0.message.content": "What's the weather in Boston?"
}
```

The available tools are recorded on the span:

```json
{
  "llm.tools.0.tool.json_schema": "{\"type\": \"function\", \"function\": {\"name\": \"get_weather\", \"description\": \"Get current weather\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}}}}"
}
```

The model responds with a tool call:

```json
{
  "llm.output_messages.0.message.role": "assistant",
  "llm.output_messages.0.message.tool_calls.0.tool_call.id": "call_123",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.name": "get_weather",
  "llm.output_messages.0.message.tool_calls.0.tool_call.function.arguments": "{\"location\": \"Boston, MA\"}"
}
```

On the follow-up call, the tool result is passed back as an input message:

```json
{
  "llm.input_messages.2.message.role": "tool",
  "llm.input_messages.2.message.content": "{\"temperature\": 65, \"condition\": \"cloudy\"}",
  "llm.input_messages.2.message.tool_call_id": "call_123"
}
```

Finally, the model produces its answer:

```json
{
  "llm.output_messages.0.message.role": "assistant",
  "llm.output_messages.0.message.content": "The current weather in Boston is 65°F and cloudy."
}
```
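Consumers reading these spans have to reverse the flattening. A small sketch of grouping exported attributes back into per-message dictionaries; the sample attributes and function name are illustrative:

```python
import re
from collections import defaultdict

# A few flattened attributes as they might be exported from a span.
attrs = {
    "llm.output_messages.0.message.role": "assistant",
    "llm.output_messages.0.message.tool_calls.0.tool_call.id": "call_123",
    "llm.output_messages.0.message.tool_calls.0.tool_call.function.name": "get_weather",
}


def unflatten_output_messages(attrs: dict) -> dict:
    """Group flattened output-message attributes by message index (illustrative)."""
    pattern = re.compile(r"^llm\.output_messages\.(\d+)\.message\.(.+)$")
    messages: defaultdict = defaultdict(dict)
    for key, value in attrs.items():
        if match := pattern.match(key):
            messages[int(match.group(1))][match.group(2)] = value
    return dict(messages)


print(unflatten_output_messages(attrs))
# {0: {'role': 'assistant', 'tool_calls.0.tool_call.id': 'call_123', ...}}
```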
Some implementations may use legacy attributes for function calling:

- `message.function_call_name`: Function name (deprecated; use `tool_calls`)
- `message.function_call_arguments_json`: Function arguments (deprecated; use `tool_calls`)
- `llm.function_call`: Complete function call as JSON (deprecated)

New implementations should use the `tool_calls` structure described above.
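For consumers that still encounter older spans, a minimal normalization sketch, assuming a single legacy call recorded under output message 0 (the exact key placement may vary by framework):

```python
def migrate_legacy_function_call(attrs: dict) -> dict:
    """Rewrite legacy function-call keys into the tool_calls form.

    A sketch only: it assumes one call on output message 0 and that the
    legacy keys sit under that message's prefix.
    """
    out = dict(attrs)
    prefix = "llm.output_messages.0.message"
    name = out.pop(f"{prefix}.function_call_name", None)
    args = out.pop(f"{prefix}.function_call_arguments_json", None)
    if name is not None:
        out[f"{prefix}.tool_calls.0.tool_call.function.name"] = name
    if args is not None:
        # Already a JSON string in the legacy form, so pass it through as-is.
        out[f"{prefix}.tool_calls.0.tool_call.function.arguments"] = args
    return out
```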