• Creates a document relevancy evaluator function.

    This function returns an evaluator that determines whether a given document text is relevant to a provided input question. The evaluator uses a classification model and a prompt template to make its determination.

    Parameters

    • args: DocumentRelevancyEvaluatorArgs

      The arguments for creating the document relevancy evaluator.

      • Optional choices?: ClassificationChoicesMap
      • model: LanguageModel
      • Optional name?: string
      • Optional optimizationDirection?: OptimizationDirection
      • Optional promptTemplate?: string
      • Optional telemetry?: { isEnabled?: boolean; tracer?: Tracer }
        • Optional isEnabled?: boolean

          Whether OpenTelemetry is enabled on the call. Defaults to true for visibility into evaluation calls.

        • Optional tracer?: Tracer

          The tracer to use for the call. If not provided, traces will be picked up by the global tracer.

    Returns Evaluator<DocumentRelevancyExample>

    An evaluator function that takes a DocumentRelevancyExample and returns a classification result indicating whether the document is relevant to the input question.

    const evaluator = createDocumentRelevancyEvaluator({ model: openai("gpt-4o-mini") });
    const result = await evaluator.evaluate({
      input: "What is the capital of France?",
      documentText: "Paris is the capital and most populous city of France.",
    });
    console.log(result.label); // "relevant" or "unrelated"
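
    The optional arguments can be combined to customize classification and tracing. The following sketch is illustrative only: the exact shapes of choices and optimizationDirection are assumptions, so check the package's type definitions. The openai and trace imports come from @ai-sdk/openai and @opentelemetry/api.

    import { openai } from "@ai-sdk/openai";
    import { trace } from "@opentelemetry/api";

    // Assumed shapes: `choices` maps each classification label to a score,
    // and `optimizationDirection` indicates whether higher scores are better.
    const evaluator = createDocumentRelevancyEvaluator({
      model: openai("gpt-4o-mini"),
      name: "document-relevancy",
      choices: { relevant: 1, unrelated: 0 },
      optimizationDirection: "maximize",
      telemetry: {
        isEnabled: true,
        // Report spans through a named tracer instead of the global one.
        tracer: trace.getTracer("evals"),
      },
    });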