`@arizeai/openinference-core` is the shared tracing foundation for OpenInference JS packages. It provides:

- Context attribute propagation (`session.id`, `user.id`, metadata, tags, prompt template)
- Tracing wrappers (`withSpan`, `traceChain`, `traceAgent`, `traceTool`)
- A method decorator (`@observe`)
- `OITracer` trace config for masking sensitive attributes

## Installation

```shell
npm install @arizeai/openinference-core

# Only needed if you want to run the Quick Start example in this README:
npm install @arizeai/openinference-semantic-conventions @opentelemetry/sdk-trace-node @opentelemetry/resources
```
## Quick start

This example exports spans to stdout and sets an OpenInference project name.

```typescript
import {
  OpenInferenceSpanKind,
  SEMRESATTRS_PROJECT_NAME,
} from "@arizeai/openinference-semantic-conventions";
import {
  ConsoleSpanExporter,
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { withSpan } from "@arizeai/openinference-core";

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [SEMRESATTRS_PROJECT_NAME]: "openinference-core-demo",
  }),
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())],
});
provider.register();

const answerQuestion = withSpan(
  async (question: string) => {
    return `Answer: ${question}`;
  },
  {
    name: "answer-question",
    kind: OpenInferenceSpanKind.CHAIN,
  },
);

async function main() {
  const answer = await answerQuestion("What is OpenInference?");
  console.log(answer);
}

void main();
```
## Context attribute propagation

Each setter returns a new OpenTelemetry context. Compose them to propagate request-level attributes:

- `setSession(context, { sessionId })`
- `setUser(context, { userId })`
- `setMetadata(context, metadataObject)`
- `setTags(context, string[])`
- `setPromptTemplate(context, { template, variables?, version? })`
- `setAttributes(context, attributes)`

```typescript
import { context } from "@opentelemetry/api";
import {
  setAttributes,
  setMetadata,
  setPromptTemplate,
  setSession,
  setTags,
  setUser,
} from "@arizeai/openinference-core";

let ctx = context.active();
ctx = setSession(ctx, { sessionId: "sess-42" });
ctx = setUser(ctx, { userId: "user-7" });
ctx = setMetadata(ctx, { tenant: "acme", environment: "prod" });
ctx = setTags(ctx, ["support", "priority-high"]);
ctx = setPromptTemplate(ctx, {
  template: "Answer using docs about {topic}",
  variables: { topic: "billing" },
  version: "v3",
});
ctx = setAttributes(ctx, { "app.request_id": "req-123" });

context.with(ctx, async () => {
  // spans started in this context by openinference-core wrappers
  // include these propagated attributes automatically
});
```
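The setters never mutate the context they receive; each call returns a new context. A minimal self-contained sketch of that immutable pattern (illustrative only, not the library's implementation):

```typescript
// Illustrative: each "setter" copies the map and returns a new one,
// leaving the original context untouched (the pattern the setters above follow).
type Ctx = ReadonlyMap<string, unknown>;

function setValue(ctx: Ctx, key: string, value: unknown): Ctx {
  const next = new Map(ctx);
  next.set(key, value);
  return next;
}

const base: Ctx = new Map();
const withSession = setValue(base, "session.id", "sess-42");
const withUser = setValue(withSession, "user.id", "user-7");

console.log(base.size); // 0 -- the original context is unchanged
console.log(withUser.get("session.id")); // "sess-42"
```

This is why the example above reassigns `ctx` after every call: the previous context is never modified in place.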
If you create spans manually with a plain OpenTelemetry tracer, apply propagated attributes explicitly:

```typescript
import { context, trace } from "@opentelemetry/api";
import { getAttributesFromContext } from "@arizeai/openinference-core";

const tracer = trace.getTracer("manual-tracer");
const span = tracer.startSpan("manual-span");
span.setAttributes(getAttributesFromContext(context.active()));
span.end();
```
## `withSpan`

```typescript
import { OpenInferenceSpanKind } from "@arizeai/openinference-semantic-conventions";
import { withSpan } from "@arizeai/openinference-core";

const retrieve = withSpan(
  async (query: string) => {
    return [`Document for ${query}`];
  },
  {
    name: "retrieve-documents",
    kind: OpenInferenceSpanKind.RETRIEVER,
  },
);
```
## `traceChain`, `traceAgent`, `traceTool`

These wrappers call `withSpan` and set `kind` automatically.

```typescript
import { traceAgent, traceChain, traceTool } from "@arizeai/openinference-core";

const tracedChain = traceChain(async (q: string) => `chain result: ${q}`, {
  name: "rag-chain",
});

const tracedTool = traceTool(async (city: string) => ({ temp: 72, city }), {
  name: "weather-tool",
});

const tracedAgent = traceAgent(
  async (q: string) => {
    const toolResult = await tracedTool("seattle");
    return tracedChain(`${q} (${toolResult.temp}F)`);
  },
  { name: "qa-agent" },
);
```
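Conceptually, each convenience wrapper just forwards to `withSpan` with the `kind` pre-filled. A self-contained sketch of that pattern (illustrative, not the library's source; `withSpanSketch` is a stand-in that only records its options):

```typescript
// Illustrative stand-in for withSpan: records the options it was called with.
type Options = { name: string; kind: string };

function withSpanSketch<A extends unknown[], R>(
  fn: (...args: A) => R,
  options: Options,
): { fn: (...args: A) => R; options: Options } {
  return { fn, options };
}

// A traceChain-style wrapper: same signature minus kind, which is fixed.
function traceChainSketch<A extends unknown[], R>(
  fn: (...args: A) => R,
  options: Omit<Options, "kind">,
) {
  return withSpanSketch(fn, { ...options, kind: "CHAIN" });
}

const traced = traceChainSketch((q: string) => `result: ${q}`, { name: "rag-chain" });
console.log(traced.options.kind); // "CHAIN"
```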
## Custom input and output attributes

Use `processInput` and `processOutput` to control which attributes are recorded from a wrapped function's arguments and return value:

```typescript
import { getInputAttributes, getRetrieverAttributes, withSpan } from "@arizeai/openinference-core";

const retriever = withSpan(async (query: string) => [`Doc A for ${query}`, `Doc B for ${query}`], {
  name: "retriever",
  kind: "RETRIEVER",
  processInput: (query) => getInputAttributes(query),
  processOutput: (documents) =>
    getRetrieverAttributes({
      documents: documents.map((content, i) => ({
        id: `doc-${i}`,
        content,
      })),
    }),
});
```
## Decorator (`@observe`)

`observe` wraps class methods with tracing and preserves the method's `this` context. Use TypeScript 5+ standard decorators when applying `@observe`.

```typescript
import { OpenInferenceSpanKind } from "@arizeai/openinference-semantic-conventions";
import { observe } from "@arizeai/openinference-core";

class ChatService {
  @observe({ kind: OpenInferenceSpanKind.CHAIN })
  async runWorkflow(message: string) {
    return `processed: ${message}`;
  }

  @observe({ name: "llm-call", kind: OpenInferenceSpanKind.LLM })
  async callModel(prompt: string) {
    return `model output for: ${prompt}`;
  }
}
```
## Semantic attribute helpers

Use these helpers to generate OpenInference-compatible attributes and attach them to spans:

- `getLLMAttributes({ provider, modelName, inputMessages, outputMessages, tokenCount, tools, ... })`
- `getEmbeddingAttributes({ modelName, embeddings })`
- `getRetrieverAttributes({ documents })`
- `getToolAttributes({ name, description?, parameters })`
- `getMetadataAttributes(metadataObject)`
- `getInputAttributes(input)` / `getOutputAttributes(output)`
- `defaultProcessInput(...args)` / `defaultProcessOutput(result)`

Example:

```typescript
import { trace } from "@opentelemetry/api";
import { getLLMAttributes } from "@arizeai/openinference-core";

const tracer = trace.getTracer("llm-service");

tracer.startActiveSpan("llm-inference", (span) => {
  span.setAttributes(
    getLLMAttributes({
      provider: "openai",
      modelName: "gpt-4o-mini",
      inputMessages: [{ role: "user", content: "What is OpenInference?" }],
      outputMessages: [{ role: "assistant", content: "OpenInference is..." }],
      tokenCount: { prompt: 12, completion: 44, total: 56 },
      invocationParameters: { temperature: 0.2 },
    }),
  );
  span.end();
});
```
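These helpers return flat, dot-delimited span attributes rather than nested objects (for example `llm.token_count.total`), because OpenTelemetry span attributes must be flat key-value pairs. A self-contained sketch of that flattening shape (illustrative; the real attribute keys come from `@arizeai/openinference-semantic-conventions`):

```typescript
// Illustrative: flattens a nested object into dot-delimited attribute keys,
// the general shape the helpers above produce for OpenTelemetry spans.
function flattenAttributes(
  obj: Record<string, unknown>,
  prefix = "",
): Record<string, string | number | boolean> {
  const out: Record<string, string | number | boolean> = {};
  for (const [key, value] of Object.entries(obj)) {
    const full = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object") {
      Object.assign(out, flattenAttributes(value as Record<string, unknown>, full));
    } else {
      out[full] = value as string | number | boolean;
    }
  }
  return out;
}

console.log(
  flattenAttributes({
    llm: { model_name: "gpt-4o-mini", token_count: { total: 56 } },
  }),
);
// { "llm.model_name": "gpt-4o-mini", "llm.token_count.total": 56 }
```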
## Trace config (`OITracer`)

`OITracer` wraps an OpenTelemetry tracer and can redact or drop sensitive attributes before writing spans:

```typescript
import { trace } from "@opentelemetry/api";
import { OpenInferenceSpanKind } from "@arizeai/openinference-semantic-conventions";
import { OITracer, withSpan } from "@arizeai/openinference-core";

const tracer = new OITracer({
  tracer: trace.getTracer("my-service"),
  traceConfig: {
    hideInputs: true,
    hideOutputText: true,
    hideEmbeddingVectors: true,
    base64ImageMaxLength: 8_000,
  },
});

const traced = withSpan(async (prompt: string) => `model response for ${prompt}`, {
  tracer,
  kind: OpenInferenceSpanKind.LLM,
  name: "safe-llm-call",
});
```
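In practice, masking means that before attributes reach the exported span, the configured values are dropped or replaced with a redaction placeholder. A self-contained sketch of the idea (illustrative only; the `"__REDACTED__"` token and the key matching are assumptions, not the library's exact behavior):

```typescript
// Illustrative: replaces values for masked keys with a placeholder before
// the attributes reach the span. The "__REDACTED__" token is an assumption.
function maskAttributes(
  attributes: Record<string, unknown>,
  config: { hideInputs?: boolean },
): Record<string, unknown> {
  const masked: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(attributes)) {
    const isInput = key === "input.value" || key.startsWith("llm.input_messages");
    masked[key] = config.hideInputs && isInput ? "__REDACTED__" : value;
  }
  return masked;
}

console.log(
  maskAttributes(
    { "input.value": "secret prompt", "llm.model_name": "gpt-4o-mini" },
    { hideInputs: true },
  ),
);
// { "input.value": "__REDACTED__", "llm.model_name": "gpt-4o-mini" }
```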
You can also configure masking with environment variables:

- `OPENINFERENCE_HIDE_INPUTS`
- `OPENINFERENCE_HIDE_OUTPUTS`
- `OPENINFERENCE_HIDE_INPUT_MESSAGES`
- `OPENINFERENCE_HIDE_OUTPUT_MESSAGES`
- `OPENINFERENCE_HIDE_INPUT_IMAGES`
- `OPENINFERENCE_HIDE_INPUT_TEXT`
- `OPENINFERENCE_HIDE_OUTPUT_TEXT`
- `OPENINFERENCE_HIDE_EMBEDDING_VECTORS`
- `OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH`
- `OPENINFERENCE_HIDE_PROMPTS`

## Utilities

- `withSafety({ fn, onError? })`: wraps a function and returns `null` on error
- `safelyJSONStringify(value)` / `safelyJSONParse(value)`: guarded JSON operations

## Local package docs

Once you've installed the openinference-core package, you already have the full openinference-core documentation and source code available locally inside `node_modules`. Your coding agent can read these directly -- no internet access required.
```
node_modules/@arizeai/openinference-core/src/   # Full source code organized by module
node_modules/@arizeai/openinference-core/docs/  # Official documentation with examples
```
This means your agent can look up accurate API signatures, implementations, and usage examples directly from the installed package -- ensuring it always uses the version of the SDK that's actually installed in your project.
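For reference, a minimal sketch of the `withSafety({ fn, onError? })` contract described above (an illustrative reimplementation, not the library's code):

```typescript
// Illustrative: returns a wrapped function that never throws; errors are
// routed to onError (if given) and the wrapper returns null instead.
function withSafetySketch<A extends unknown[], R>(options: {
  fn: (...args: A) => R;
  onError?: (error: unknown) => void;
}): (...args: A) => R | null {
  return (...args: A) => {
    try {
      return options.fn(...args);
    } catch (error) {
      options.onError?.(error);
      return null;
    }
  };
}

const safeParse = withSafetySketch({ fn: (s: string) => JSON.parse(s) as unknown });
console.log(safeParse('{"ok":true}')); // { ok: true }
console.log(safeParse("not json")); // null
```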