OpenInference JS
    LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT: "llm.token_count.prompt_details.cache_input" = ...

    The number of input tokens in the prompt that were served from the cache
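
    A minimal sketch of recording this attribute on an OpenTelemetry span. It assumes the constant is exported via the `SemanticConventions` object from the `@arizeai/openinference-semantic-conventions` package; the tracer name and the token count value are illustrative.

    ```ts
    import { trace } from "@opentelemetry/api";
    import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";

    // Illustrative tracer name; use your instrumentation's own.
    const tracer = trace.getTracer("example-llm-instrumentation");

    const span = tracer.startSpan("llm-call");
    // Record how many prompt tokens were served from the cache
    // (128 is a placeholder value).
    span.setAttribute(
      SemanticConventions.LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT,
      128,
    );
    span.end();
    ```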