OpenInference JS
LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ: "llm.token_count.prompt_details.cache_read" = "llm.token_count.prompt_details.cache_read"

    The number of prompt tokens retrieved from the cache (in tokens)
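
    For reference, a minimal sketch of how this attribute might be set on an OpenTelemetry span. The import of `SemanticConventions` from `@arizeai/openinference-semantic-conventions` and the shape of the `usage` object are assumptions for illustration; the attribute key itself is the constant documented above.

    ```ts
    import { trace } from "@opentelemetry/api";
    // Assumed export: the SemanticConventions object from the
    // OpenInference JS semantic-conventions package.
    import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";

    const tracer = trace.getTracer("llm-example");

    // Hypothetical usage payload from an LLM provider; the field name
    // `cachedPromptTokens` is illustrative, not any provider's actual API.
    const usage = { cachedPromptTokens: 1024 };

    const span = tracer.startSpan("llm-call");
    // Record the number of prompt tokens that were served from cache.
    span.setAttribute(
      SemanticConventions.LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ,
      usage.cachedPromptTokens,
    );
    span.end();
    ```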