OpenInference JS
    • Experimental

      A decorator factory for tracing operations in an LLM application.

      This decorator wraps class methods to automatically create OpenTelemetry spans for tracing purposes. It leverages the withSpan function internally to ensure consistent tracing behavior across the library.

      The decorator uses an optimized caching mechanism to avoid rebinding methods on every call, improving performance for frequently called methods.
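      The caching idea can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual code: the traced wrapper is created once per instance and stored in a WeakMap, so the method is not re-bound and re-wrapped on every call.

```typescript
// Hypothetical sketch of per-instance caching for a traced method wrapper.
type AnyFn = (...args: unknown[]) => unknown;

function cachedTracedMethod<T extends object>(
  original: AnyFn,
  wrap: (fn: AnyFn) => AnyFn, // e.g. a span-creating wrapper such as withSpan
) {
  const cache = new WeakMap<T, AnyFn>();
  return function (this: T, ...args: unknown[]) {
    let traced = cache.get(this);
    if (!traced) {
      traced = wrap(original.bind(this)); // bind and wrap once, reuse thereafter
      cache.set(this, traced);
    }
    return traced(...args);
  };
}
```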

      This API is experimental and may change in future versions.

      Type Parameters

      • Fn — the type of the decorated method

      Parameters

      • options: SpanTraceOptions = {}

        Configuration options for span tracing in OpenInference.

        This interface defines all the available options for customizing how functions are traced, including span naming, tracer selection, span classification, and custom input/output processing. These options are used by tracing decorators and wrapper functions to control the tracing behavior.

        • Optional attributes?: Attributes

          Base attributes to be added to every span created with these options.

          These attributes will be merged with any attributes generated by input/output processors and OpenInference semantic attributes. Base attributes are useful for adding consistent metadata like service information, version numbers, environment details, or any other static attributes that should be present on all spans.

          // Custom business context
          attributes: {
            'metadata': JSON.stringify({ tenant: 'tenant-123', feature: 'new-algorithm-enabled', request: 'mobile-app' })
          }
        • Optional kind?:
              | OpenInferenceSpanKind
              | "LLM"
              | "CHAIN"
              | "TOOL"
              | "RETRIEVER"
              | "RERANKER"
              | "EMBEDDING"
              | "AGENT"
              | "GUARDRAIL"
              | "EVALUATOR"

          The OpenInference span kind for semantic categorization in LLM applications.

          This provides domain-specific classification for AI/ML operations, helping to organize and understand the different types of operations in an LLM workflow.

          Default: OpenInferenceSpanKind.CHAIN
          
          - `LLM` for language model inference
          - `CHAIN` for workflow sequences
          - `AGENT` for autonomous decision-making
          - `TOOL` for external tool usage
        • Optional name?: string

          Custom name for the span.

          If not provided, the name of the decorated function or wrapped function will be used as the span name. This is useful for providing more descriptive or standardized names for operations.

          Examples: "user-authentication", "data-processing-pipeline", "llm-inference"
          
        • Optional openTelemetrySpanKind?: SpanKind

          The OpenTelemetry span kind to classify the span's role in a trace.

          This determines how the span is categorized in the OpenTelemetry ecosystem and affects how tracing tools display and analyze the span.

          Default: SpanKind.INTERNAL
          
          - `SpanKind.CLIENT` for outbound requests
          - `SpanKind.SERVER` for inbound request handling
          - `SpanKind.INTERNAL` for internal operations
        • Optional processInput?: InputToAttributesFn<Fn>

          Custom function to process input arguments into span attributes.

          This allows for custom serialization and attribute extraction from function arguments. If not provided, the default input processor will be used, which safely JSON-stringifies the arguments.

          Param: the function arguments to process

          Returns: an OpenTelemetry attributes object

          processInput: (...args) => ({
            'input.value': JSON.stringify(args),
            'input.mimeType': MimeType.JSON
          })
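          A fuller hypothetical example of a custom input processor: redacting a sensitive field before attaching the arguments to the span. The attribute keys follow the conventions shown above; the 'apiKey' field name is illustrative, not part of the library.

```typescript
// Hypothetical custom input processor that redacts a sensitive field.
type Attrs = Record<string, string>;

function redactingProcessInput(...args: unknown[]): Attrs {
  const sanitized = args.map((arg) => {
    if (arg !== null && typeof arg === "object") {
      const copy = { ...(arg as Record<string, unknown>) };
      if ("apiKey" in copy) copy.apiKey = "[REDACTED]"; // illustrative field
      return copy;
    }
    return arg;
  });
  return {
    "input.value": JSON.stringify(sanitized),
    "input.mimeType": "application/json",
  };
}
```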
        • Optional processOutput?: OutputToAttributesFn<Fn>

          Custom function to process output values into span attributes.

          This allows for custom serialization and attribute extraction from function return values. If not provided, the default output processor will be used, which safely JSON-stringifies the result.

          Param: the function's return value to process

          Returns: an OpenTelemetry attributes object

          processOutput: (result) => ({
            'output.value': JSON.stringify(result),
            'output.mimeType': MimeType.JSON
          })
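          One way to read "safely JSON-stringifies" is that the processor should never throw. A minimal sketch under that assumption (not the library's actual implementation) falls back to String() for values JSON.stringify cannot handle, such as circular references or BigInt:

```typescript
// Hypothetical non-throwing stringifier for span output attributes.
function safeStringify(value: unknown): string {
  try {
    // JSON.stringify returns undefined for undefined/functions; fall back.
    return JSON.stringify(value) ?? String(value);
  } catch {
    // Circular structures and BigInt cause JSON.stringify to throw.
    return String(value);
  }
}

const processOutput = (result: unknown) => ({
  "output.value": safeStringify(result),
  "output.mimeType": "application/json",
});
```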
        • Optional tracer?: Tracer

          Custom OpenTelemetry tracer instance to use for this span.

          If not provided, the global tracer will be used. This allows for using different tracers for different parts of the application or for testing purposes with mock tracers.

          const customTracer = trace.getTracer('my-service', '1.0.0');
          const options: SpanTraceOptions = { tracer: customTracer };

      Returns (originalMethod: Fn, ctx: ClassMethodDecoratorContext) => Fn

      A decorator function that can be applied to class methods

      class MyService {
        @observe({ name: "processData", kind: OpenInferenceSpanKind.LLM })
        async processData(input: string) {
          // Method implementation
          return `processed: ${input}`;
        }
      }