Throws different error types based on the failure: EvaluationFetchError when the incomplete evaluations cannot be fetched from the server, and EvaluationAbortedError when the evaluation run is stopped because an evaluator failed. Both expose the underlying error on their cause property.
```ts
import { resumeEvaluation } from "@arizeai/phoenix-client/experiments";

// Standard usage: evaluation name matches evaluator name
try {
  await resumeEvaluation({
    experimentId: "exp_123",
    evaluators: [
      {
        name: "correctness",
        kind: "CODE",
        evaluate: async ({ output, expected }) => ({
          score: output === expected ? 1 : 0,
        }),
      },
    ],
  });
} catch (error) {
  // Handle by error name (no instanceof needed)
  if (error.name === "EvaluationFetchError") {
    console.error("Failed to connect to server:", error.cause);
  } else if (error.name === "EvaluationAbortedError") {
    console.error("Evaluation stopped due to error:", error.cause);
  } else {
    console.error("Unexpected error:", error);
  }
}
```
```ts
// Stop on first error (useful for debugging)
await resumeEvaluation({
  experimentId: "exp_123",
  evaluators: [myEvaluator],
  stopOnFirstError: true, // Exit immediately on first failure
});
```
Resume incomplete evaluations for an experiment.
This function identifies which evaluations have not been completed (either missing or failed) and runs the evaluators only for those runs, so a partially failed or interrupted evaluation can be recovered without re-running evaluations that already succeeded.
The function processes incomplete evaluations in batches using pagination to minimize memory usage.
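A rough sketch of that batching idea follows; this is not the library's actual internals, and the page-fetching and batch-handling callbacks are hypothetical stand-ins. The loop keeps only one page of incomplete runs in memory at a time:

```ts
// Conceptual sketch of cursor-based batch processing; not Phoenix's real implementation.
type IncompletePage = { runIds: string[]; nextCursor?: string };

async function forEachIncompleteBatch(
  fetchPage: (cursor?: string) => Promise<IncompletePage>, // hypothetical page fetcher
  handleBatch: (runIds: string[]) => Promise<void>, // hypothetical evaluator runner
): Promise<void> {
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor); // fetch one page of incomplete runs
    await handleBatch(page.runIds); // evaluate only this batch
    cursor = page.nextCursor; // advance the cursor; the previous page can be released
  } while (cursor !== undefined);
}
```

Because each iteration works on a single page, memory use stays roughly constant regardless of how many runs the experiment has.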
Evaluation names are matched to evaluator names: for example, if you pass an evaluator named "accuracy", it will check for and resume any runs missing the "accuracy" evaluation.
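A minimal sketch of that matching behavior, reusing the evaluator shape from the examples above (the experiment ID and scoring logic are placeholders):

```ts
import { resumeEvaluation } from "@arizeai/phoenix-client/experiments";

// Only runs missing (or with a failed) "accuracy" evaluation are re-evaluated;
// runs that already have a completed "accuracy" evaluation are left untouched.
await resumeEvaluation({
  experimentId: "exp_123", // placeholder experiment ID
  evaluators: [
    {
      name: "accuracy", // matched against existing evaluation names
      kind: "CODE",
      evaluate: async ({ output, expected }) => ({
        score: output === expected ? 1 : 0,
      }),
    },
  ],
});
```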
Note: Multi-output evaluators (evaluators that return an array of results) are not supported for resume operations. Each evaluator should produce a single evaluation result with a name matching the evaluator's name.
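To make the distinction concrete, here is a sketch of the two evaluator shapes; the result fields in the multi-output case are illustrative assumptions, and only the single-result form (as in the examples above) works with resume:

```ts
// Supported for resume: one result per run, matching the evaluator's name.
const correctnessEvaluator = {
  name: "correctness",
  kind: "CODE" as const,
  evaluate: async ({ output, expected }: { output: unknown; expected: unknown }) => ({
    score: output === expected ? 1 : 0,
  }),
};

// NOT supported for resume: a single evaluator returning an array of results
// (field names here are assumed for illustration).
const multiOutputEvaluator = {
  name: "quality",
  kind: "CODE" as const,
  evaluate: async () => [
    { name: "fluency", score: 1 },
    { name: "relevance", score: 0 },
  ],
};
```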