Arize Phoenix TS

    Module @arizeai/phoenix-otel


    A lightweight wrapper around OpenTelemetry for Node.js applications that simplifies sending traces to Arize Phoenix. @arizeai/phoenix-otel handles provider registration and OTLP export, then re-exports the full @arizeai/openinference-core helper surface from the same package path so you can register tracing and author manual spans from one import.

    Note: This package is under active development and APIs may change.

    • Simple Setup - One-line configuration with sensible defaults
    • Environment Variables - Automatic configuration from environment variables
    • Batch Processing - Built-in batch span processing for production use
    • OpenInference Helpers Included - Re-exports withSpan, traceChain, traceAgent, traceTool, observe, context setters, attribute builders, OITracer, and utility helpers
    • Provider-Swap Safe Wrappers - The re-exported OpenInference helpers resolve the default tracer when the wrapped function executes, so module-scoped wrappers continue following global provider changes
    • Agent-Friendly Local Docs - Ships curated docs and source in node_modules/@arizeai/phoenix-otel/
    Install from npm:

    npm install @arizeai/phoenix-otel
    

    The simplest way to get started is to use register() together with the built-in tracing helpers:

    import { register, traceChain } from "@arizeai/phoenix-otel";

    const provider = register({
      projectName: "my-app",
    });

    const answerQuestion = traceChain(
      async (question: string) => `Handled: ${question}`,
      { name: "answer-question" }
    );

    await answerQuestion("What is Phoenix?");
    await provider.shutdown();

    register() sets up the Phoenix exporter. The tracing helpers come from @arizeai/openinference-core, re-exported through @arizeai/phoenix-otel.

    For production use with Phoenix Cloud:

    import { register } from "@arizeai/phoenix-otel";

    register({
      projectName: "my-app",
      url: "https://app.phoenix.arize.com",
      apiKey: process.env.PHOENIX_API_KEY,
    });

    The register function automatically reads from environment variables:

    # For local Phoenix server (default)
    export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"

    # For Phoenix Cloud
    export PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com"
    export PHOENIX_API_KEY="your-api-key"
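    The resolution order is: explicit option, then environment variable, then built-in default. A minimal sketch of that precedence (resolvePhoenixUrl is a hypothetical helper used only for illustration, not part of the package):

```typescript
// Hypothetical sketch of how an endpoint could be resolved:
// explicit option > PHOENIX_COLLECTOR_ENDPOINT > local default.
function resolvePhoenixUrl(explicitUrl?: string): string {
  return (
    explicitUrl ??
    process.env.PHOENIX_COLLECTOR_ENDPOINT ??
    "http://localhost:6006"
  );
}

process.env.PHOENIX_COLLECTOR_ENDPOINT = "https://app.phoenix.arize.com";
console.log(resolvePhoenixUrl()); // falls back to the environment variable
console.log(resolvePhoenixUrl("http://phoenix.internal:6006")); // explicit wins
```

    An explicit url passed to register() therefore always overrides PHOENIX_COLLECTOR_ENDPOINT.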

    The register function accepts the following parameters:

    • projectName (string, default "default"): The project name for organizing traces in Phoenix
    • url (string, default "http://localhost:6006"): The URL of your Phoenix instance
    • apiKey (string, default undefined): API key for Phoenix authentication
    • headers (Record<string, string>, default {}): Custom headers for OTLP requests
    • batch (boolean, default true): Use batch span processing (recommended for production)
    • instrumentations (Instrumentation[], default undefined): Array of OpenTelemetry instrumentations to register
    • global (boolean, default true): Register the tracer provider globally
    • diagLogLevel (DiagLogLevel, default undefined): Diagnostic logging level for debugging

    Automatically instrument common libraries (works best with CommonJS):

    import { register } from "@arizeai/phoenix-otel";
    import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
    import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";

    register({
      projectName: "my-express-app",
      instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
    });

    Note: Auto-instrumentation via the instrumentations parameter works best with CommonJS projects. ESM projects require manual instrumentation.

    For ESM projects, manually instrument libraries:

    // instrumentation.ts
    import { register, registerInstrumentations } from "@arizeai/phoenix-otel";
    import OpenAI from "openai";
    import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

    register({
      projectName: "openai-app",
    });

    // Manual instrumentation for ESM
    const instrumentation = new OpenAIInstrumentation();
    instrumentation.manuallyInstrument(OpenAI);

    registerInstrumentations({
      instrumentations: [instrumentation],
    });

    // main.ts
    import "./instrumentation.ts";
    import OpenAI from "openai";

    const openai = new OpenAI();

    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Hello!" }],
    });

    The package includes withSpan, traceChain, traceAgent, and traceTool for wrapping functions with OpenInference spans. Each helper automatically records inputs, outputs, errors, and span kind.

    import {
      register,
      traceAgent,
      traceChain,
      traceTool,
      withSpan,
    } from "@arizeai/phoenix-otel";

    register({ projectName: "my-app" });

    // traceTool: for tool calls and API lookups
    const searchDocs = traceTool(
      async (query: string) => {
        const response = await fetch(`/api/search?q=${query}`);
        return response.json();
      },
      { name: "search-docs" }
    );

    // traceChain: for pipeline steps and orchestration
    const summarize = traceChain(
      async (text: string) => `Summary of ${text.length} chars`,
      { name: "summarize" }
    );

    // traceAgent: for autonomous agent entry points
    const supportAgent = traceAgent(
      async (question: string) => {
        const docs = await searchDocs(question);
        return summarize(JSON.stringify(docs));
      },
      { name: "support-agent" }
    );

    // withSpan: general purpose, specify kind explicitly
    const retrieveDocs = withSpan(
      async (query: string) =>
        fetch(`/api/search?q=${query}`).then((r) => r.json()),
      { name: "retrieve-docs", kind: "RETRIEVER" }
    );

    These helpers resolve the default tracer when the wrapped function runs, so traced functions defined at module scope keep following global provider changes.
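    That late-binding behavior can be illustrated with a plain closure (a conceptual sketch, not the package's implementation): the wrapper captures a lookup function rather than the tracer itself, so a provider swap after the wrapper is defined still takes effect on the next call.

```typescript
// Conceptual sketch of late tracer resolution: the wrapper stores a
// lookup and resolves it on every call, not at definition time.
type Tracer = { name: string };

let activeTracer: Tracer = { name: "provider-a" };
const getActiveTracer = (): Tracer => activeTracer;

function traced<A extends unknown[], R>(
  fn: (...args: A) => R
): (...args: A) => { tracer: string; result: R } {
  return (...args: A) => {
    const tracer = getActiveTracer(); // resolved at call time
    return { tracer: tracer.name, result: fn(...args) };
  };
}

const wrapped = traced((x: number) => x * 2);
const first = wrapped(2); // { tracer: "provider-a", result: 4 }

activeTracer = { name: "provider-b" }; // e.g. a provider swap in a test
const second = wrapped(2); // { tracer: "provider-b", result: 4 }
```

    If the tracer were captured eagerly at definition time, second would still report provider-a.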

    Use processInput and processOutput when you want richer OpenInference attributes than the default JSON-serialized input.value and output.value.

    import {
      OpenInferenceSpanKind,
      getInputAttributes,
      getRetrieverAttributes,
      withSpan,
    } from "@arizeai/phoenix-otel";

    const retrieveDocs = withSpan(
      async (query: string) => [`Doc A for ${query}`, `Doc B for ${query}`],
      {
        name: "retrieve-docs",
        kind: OpenInferenceSpanKind.RETRIEVER,
        processInput: (query) => getInputAttributes(query),
        processOutput: (documents) =>
          getRetrieverAttributes({
            documents: documents.map((content, index) => ({
              id: `doc-${index}`,
              content,
            })),
          }),
      }
    );

    Propagate session IDs, user IDs, metadata, and tags to all child spans using context setters:

    import {
      context,
      register,
      setMetadata,
      setSession,
      setUser,
      traceChain,
    } from "@arizeai/phoenix-otel";

    register({ projectName: "my-app" });

    const handleQuery = traceChain(async (query: string) => `Handled: ${query}`, {
      name: "handle-query",
    });

    // All spans inside context.with() inherit session, user, and metadata
    await context.with(
      setMetadata(
        setUser(setSession(context.active(), { sessionId: "sess-123" }), {
          userId: "user-456",
        }),
        { environment: "production" }
      ),
      () => handleQuery("Hello")
    );

    Available setters: setSession, setUser, setMetadata, setTags, setAttributes, setPromptTemplate.
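    Conceptually, each setter takes a context and returns a new context with the attribute attached, which is why the calls nest. A minimal immutable-context sketch of that pattern (not the package's implementation; the attribute keys here are illustrative):

```typescript
// Minimal sketch of immutable context chaining: each setter returns a
// new context; earlier contexts are never mutated.
type Ctx = ReadonlyMap<string, unknown>;

const emptyContext: Ctx = new Map();

function setValue(ctx: Ctx, key: string, value: unknown): Ctx {
  const next = new Map(ctx);
  next.set(key, value);
  return next;
}

const base = emptyContext;
const withSession = setValue(base, "session.id", "sess-123");
const withUser = setValue(withSession, "user.id", "user-456");

// Later contexts see earlier values; the base is untouched.
console.log(withUser.get("session.id")); // "sess-123"
console.log(base.has("session.id")); // false
```

    Because each setter returns a fresh context, handing the final context to context.with() scopes all of the attributes to that callback without leaking them elsewhere.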

    If you create spans manually with a plain OpenTelemetry tracer, copy the propagated context attributes onto the span explicitly:

    import {
      context,
      getAttributesFromContext,
      register,
      trace,
    } from "@arizeai/phoenix-otel";

    register({ projectName: "my-app" });

    const tracer = trace.getTracer("manual-tracer");
    const span = tracer.startSpan("manual-span");
    span.setAttributes(getAttributesFromContext(context.active()));
    span.end();

    observe wraps class methods with tracing while preserving method this context. Use TypeScript 5+ standard decorators.

    import { OpenInferenceSpanKind, observe } from "@arizeai/phoenix-otel";

    class ChatService {
      @observe({ kind: OpenInferenceSpanKind.CHAIN })
      async runWorkflow(message: string) {
        return `processed: ${message}`;
      }

      @observe({ name: "llm-call", kind: OpenInferenceSpanKind.LLM })
      async callModel(prompt: string) {
        return `model output for: ${prompt}`;
      }
    }

    Use the attribute helpers when you want to build OpenInference-compatible span attributes directly:

    import { getLLMAttributes, trace } from "@arizeai/phoenix-otel";

    const tracer = trace.getTracer("llm-service");

    tracer.startActiveSpan("llm-inference", (span) => {
      span.setAttributes(
        getLLMAttributes({
          provider: "openai",
          modelName: "gpt-4o-mini",
          inputMessages: [{ role: "user", content: "What is Phoenix?" }],
          outputMessages: [{ role: "assistant", content: "Phoenix is..." }],
          tokenCount: { prompt: 12, completion: 44, total: 56 },
          invocationParameters: { temperature: 0.2 },
        })
      );
      span.end();
    });

    Available helpers include:

    • getLLMAttributes
    • getEmbeddingAttributes
    • getRetrieverAttributes
    • getToolAttributes
    • getMetadataAttributes
    • getInputAttributes / getOutputAttributes
    • defaultProcessInput / defaultProcessOutput
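    Each builder returns a flat record of OpenInference attribute keys suitable for span.setAttributes(). As an illustration of the shape, here is a hypothetical sketch of a tool-attribute builder; the keys follow the OpenInference semantic conventions, but treat them as an assumption rather than the package's guaranteed output:

```typescript
// Hypothetical sketch (sketchToolAttributes is not a package export):
// tool metadata flattened into OpenInference-style attribute keys.
function sketchToolAttributes(tool: {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}): Record<string, string> {
  const attributes: Record<string, string> = { "tool.name": tool.name };
  if (tool.description !== undefined) {
    attributes["tool.description"] = tool.description;
  }
  if (tool.parameters !== undefined) {
    // Structured values are serialized so they fit flat span attributes
    attributes["tool.parameters"] = JSON.stringify(tool.parameters);
  }
  return attributes;
}

const attrs = sketchToolAttributes({
  name: "search-docs",
  description: "Search the documentation index",
  parameters: { query: { type: "string" } },
});
```

    The real builders follow the same pattern: plain objects in, flat string-keyed attributes out.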

    OITracer wraps an OpenTelemetry tracer and can redact or drop sensitive OpenInference attributes before spans are written:

    import {
      OITracer,
      OpenInferenceSpanKind,
      trace,
      withSpan,
    } from "@arizeai/phoenix-otel";

    const tracer = new OITracer({
      tracer: trace.getTracer("my-service"),
      traceConfig: {
        hideInputs: true,
        hideOutputText: true,
        hideEmbeddingVectors: true,
        base64ImageMaxLength: 8_000,
      },
    });

    const safeLLMCall = withSpan(
      async (prompt: string) => `model response for ${prompt}`,
      {
        tracer,
        kind: OpenInferenceSpanKind.LLM,
        name: "safe-llm-call",
      }
    );

    Supported environment variables include:

    • OPENINFERENCE_HIDE_INPUTS
    • OPENINFERENCE_HIDE_OUTPUTS
    • OPENINFERENCE_HIDE_INPUT_MESSAGES
    • OPENINFERENCE_HIDE_OUTPUT_MESSAGES
    • OPENINFERENCE_HIDE_INPUT_IMAGES
    • OPENINFERENCE_HIDE_INPUT_TEXT
    • OPENINFERENCE_HIDE_OUTPUT_TEXT
    • OPENINFERENCE_HIDE_EMBEDDING_VECTORS
    • OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH
    • OPENINFERENCE_HIDE_PROMPTS
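    For example, to redact all input payloads and cap inlined base64 images without touching code (values shown are illustrative):

```shell
# Redact input payloads and truncate large inline images via
# environment variables; no code changes required.
export OPENINFERENCE_HIDE_INPUTS="true"
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH="8000"
```
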

    For full control over attributes and timing, use the OpenTelemetry API directly:

    import { register, trace, SpanStatusCode } from "@arizeai/phoenix-otel";

    register({ projectName: "my-app" });

    const tracer = trace.getTracer("my-service");

    async function processOrder(orderId: string) {
      return tracer.startActiveSpan("process-order", async (span) => {
        try {
          span.setAttribute("order.id", orderId);
          // fetchOrderDetails is your own data-access function
          const result = await fetchOrderDetails(orderId);
          span.setAttribute("order.status", result.status);
          return result;
        } catch (error) {
          span.recordException(error as Error);
          span.setStatus({ code: SpanStatusCode.ERROR });
          throw error;
        } finally {
          span.end();
        }
      });
    }

    The package also re-exports small utilities from @arizeai/openinference-core:

    • withSafety({ fn, onError? }) wraps a function and returns null on error
    • safelyJSONStringify(value) and safelyJSONParse(value) guard JSON operations
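    Their semantics can be sketched in a few lines (a conceptual reimplementation for illustration, not the package source):

```typescript
// Conceptual sketch of the safety helpers' behavior: failures become
// null instead of thrown exceptions.
function sketchWithSafety<A extends unknown[], R>(options: {
  fn: (...args: A) => R;
  onError?: (error: unknown) => void;
}): (...args: A) => R | null {
  return (...args: A) => {
    try {
      return options.fn(...args);
    } catch (error) {
      options.onError?.(error);
      return null;
    }
  };
}

function sketchSafelyJSONParse(value: string): unknown | null {
  try {
    return JSON.parse(value);
  } catch {
    return null;
  }
}

const parseOrNull = sketchWithSafety({ fn: (s: string) => JSON.parse(s) });
console.log(parseOrNull("{ not json")); // null instead of a thrown error
```

    This is useful in instrumentation paths where a serialization failure should degrade to a missing attribute rather than crash the traced application.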

    Development (with debug logging):

    import { DiagLogLevel, register } from "@arizeai/phoenix-otel";

    register({
      projectName: "my-app-dev",
      url: "http://localhost:6006",
      batch: false, // Immediate span delivery for faster feedback
      diagLogLevel: DiagLogLevel.DEBUG,
    });

    Production (optimized for performance):

    import { register } from "@arizeai/phoenix-otel";

    register({
      projectName: "my-app-prod",
      url: "https://app.phoenix.arize.com",
      apiKey: process.env.PHOENIX_API_KEY,
      batch: true, // Batch processing for better performance
    });

    Add custom headers to OTLP requests:

    import { register } from "@arizeai/phoenix-otel";

    register({
      projectName: "my-app",
      url: "https://app.phoenix.arize.com",
      headers: {
        "X-Custom-Header": "custom-value",
        "X-Environment": process.env.NODE_ENV || "development",
      },
    });

    Use the provider explicitly without registering globally:

    import { register } from "@arizeai/phoenix-otel";

    const provider = register({
      projectName: "my-app",
      global: false,
    });

    // Use the provider explicitly
    const tracer = provider.getTracer("my-tracer");

    After install, a coding agent can inspect the exact versioned docs and implementation that shipped with the package:

    node_modules/@arizeai/phoenix-otel/docs/
    node_modules/@arizeai/phoenix-otel/src/
    

    Because @arizeai/phoenix-otel re-exports @arizeai/openinference-core, the dependency docs are also useful local references:

    node_modules/@arizeai/openinference-core/docs/
    node_modules/@arizeai/openinference-core/src/
    

    The Phoenix repo includes a phoenix-tracing skill that teaches coding agents (Claude Code, Cursor, etc.) how to instrument LLM applications with OpenInference tracing. Install it with:

    npx skills add Arize-ai/phoenix --skill phoenix-tracing
    

    Tracing helpers:

    import {
      observe,
      traceAgent,
      traceChain,
      traceTool,
      withSpan,
    } from "@arizeai/phoenix-otel";

    Context attribute setters:

    import {
      setAttributes,
      setMetadata,
      setPromptTemplate,
      setSession,
      setTags,
      setUser,
    } from "@arizeai/phoenix-otel";

    Attribute builders for rich span data:

    import {
      defaultProcessInput,
      defaultProcessOutput,
      getEmbeddingAttributes,
      getLLMAttributes,
      getRetrieverAttributes,
      getToolAttributes,
    } from "@arizeai/phoenix-otel";

    Redaction and utility helpers:

    import {
      OITracer,
      safelyJSONParse,
      safelyJSONStringify,
      withSafety,
    } from "@arizeai/phoenix-otel";

    The tracing helper wrappers resolve the default tracer when they run. That keeps spans attached to the current provider in experiments and in any workflow that swaps providers during process lifetime.

    Join our community to connect with thousands of AI builders:

    • 🌍 Join our Slack community
    • πŸ’‘ Ask questions and provide feedback in the #phoenix-support channel
    • 🌟 Leave a star on our GitHub
    • 🐞 Report bugs with GitHub Issues
    • 𝕏 Follow us on 𝕏
    • πŸ—ΊοΈ Check out our roadmap
