# OpenInference TanStack AI

This package provides an OpenInference middleware for TanStack AI. It emits OpenTelemetry spans shaped according to the OpenInference specification so TanStack AI runs can be visualized in systems like Arize and Phoenix.

## Installation

```shell
npm install --save @arizeai/openinference-tanstack-ai @tanstack/ai
```

You will also need an OpenTelemetry setup in your application. For example:

```shell
npm install --save @arizeai/phoenix-otel
```

or:

```shell
npm install --save @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-proto
```

Install the provider adapter you plan to use with TanStack AI as well, for example:

```shell
npm install --save @tanstack/ai-openai
```

## Usage

`@arizeai/openinference-tanstack-ai` exports `openInferenceMiddleware`, which plugs directly into TanStack AI's `middleware` option.

```typescript
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

import { openInferenceMiddleware } from "@arizeai/openinference-tanstack-ai";

const stream = chat({
  adapter: openaiText("gpt-4o-mini"),
  messages: [{ role: "user", content: "What is OpenInference?" }],
  middleware: [openInferenceMiddleware()],
});
```

The middleware works for both streaming and non-streaming TanStack AI calls.

```typescript
const text = await chat({
  adapter: openaiText("gpt-4o-mini"),
  stream: false,
  systemPrompts: ["You are a concise technical explainer."],
  messages: [{ role: "user", content: "Explain OpenInference in one sentence." }],
  middleware: [openInferenceMiddleware()],
});
```
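In the streaming case, you consume the returned stream incrementally. As a hedged sketch (assuming the stream is an async iterable of text chunks — check the TanStack AI docs for the exact chunk shape your adapter emits), a small helper that accumulates the streamed text might look like:

```typescript
// Hypothetical helper for consuming the streaming result from the first
// example. It assumes the stream yields plain text chunks; adapt the chunk
// handling to whatever shape your TanStack AI adapter actually emits.
export async function collectText(
  stream: AsyncIterable<string>,
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}
```

The middleware records the run the same way whether you consume the stream chunk by chunk or await the non-streaming result.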

## Tracer Setup

This package uses your application’s existing OpenTelemetry tracer provider and exporters. It does not export spans by itself.

> [!NOTE]
> Your instrumentation code should run before the middleware is applied. This ensures that the tracer provider is properly configured before the middleware starts emitting spans.

The recommended quick start is to pair it with `@arizeai/phoenix-otel`.

```typescript
import { register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-tanstack-ai-app",
  endpoint: process.env["PHOENIX_COLLECTOR_ENDPOINT"] ?? "http://localhost:6006/v1/traces",
  apiKey: process.env["PHOENIX_API_KEY"],
});
```

If you already have a standard OpenTelemetry setup, that works as well. For example, with a local Phoenix collector, a minimal manual setup looks like this:

```typescript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { Resource } from "@opentelemetry/resources";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";

const tracerProvider = new NodeTracerProvider({
  resource: new Resource({
    [SEMRESATTRS_PROJECT_NAME]: "my-tanstack-ai-app",
  }),
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: process.env["PHOENIX_COLLECTOR_ENDPOINT"] ?? "http://localhost:6006/v1/traces",
        headers:
          process.env["PHOENIX_API_KEY"] == null
            ? undefined
            : {
                Authorization: `Bearer ${process.env["PHOENIX_API_KEY"]}`,
              },
      }),
    ),
  ],
});

tracerProvider.register();
```
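For short-lived processes (scripts, CLIs, serverless handlers), make sure buffered spans are flushed before exit. A minimal sketch, assuming a provider that exposes the standard OpenTelemetry SDK `forceFlush()` and `shutdown()` methods (as `NodeTracerProvider` does):

```typescript
// Minimal structural type matching the flush/shutdown surface of the
// OpenTelemetry SDK tracer provider, so this helper accepts a
// NodeTracerProvider without importing the SDK here.
interface FlushableProvider {
  forceFlush(): Promise<void>;
  shutdown(): Promise<void>;
}

// Runs a unit of work, then flushes and shuts down the tracer provider so
// no spans are lost when the process exits immediately afterwards.
export async function withFlushedSpans(
  provider: FlushableProvider,
  run: () => Promise<void>,
): Promise<void> {
  try {
    await run();
  } finally {
    await provider.forceFlush();
    await provider.shutdown();
  }
}
```

With `SimpleSpanProcessor` spans are exported as they end, so the explicit flush matters mostly if you switch to `BatchSpanProcessor` for production workloads.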

### Custom Tracer

By default, the middleware obtains its tracer from the global OpenTelemetry tracer provider. If your application already has a request-scoped or custom tracer, pass it explicitly.

```typescript
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("tanstack-ai-request");

const middleware = openInferenceMiddleware({ tracer });
```

This is useful when you want the middleware to participate in a specific tracer setup without relying on the global default.

## What Gets Traced

The middleware emits an OpenInference span tree for each TanStack AI run. The AGENT span captures the top-level request and final response. Each LLM span captures provider and model metadata, input messages, output messages, tool definitions, and token counts. Each TOOL span captures the tool name, arguments, output, and any error. In a tool loop, LLM and TOOL spans typically alternate under the AGENT span until the model returns a final response.
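As a concrete illustration, the attributes on these spans follow the OpenInference semantic conventions. An LLM span might carry keys such as the following (illustrative values only — the exact attribute set depends on the adapter and the call):

```typescript
// Illustrative subset of OpenInference semantic-convention attribute keys
// that backends like Phoenix read from an LLM span. Values here are made up.
const exampleLlmSpanAttributes = {
  "openinference.span.kind": "LLM",
  "llm.model_name": "gpt-4o-mini",
  "llm.token_count.prompt": 42,
  "llm.token_count.completion": 12,
};
```

AGENT and TOOL spans use the same `openinference.span.kind` key with the values `"AGENT"` and `"TOOL"` respectively.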

## Examples

This package includes example files in the `examples/` directory; see `examples/README.md` for setup and run commands.

## Notes