OpenInference JS
    LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_WRITE: "llm.token_count.prompt_details.cache_write" = ...

    The number of tokens written to the prompt cache
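
    As a minimal sketch (the span-like attributes object and the token value are hypothetical, not part of this package), the constant is an attribute key that can be set on a trace span when a provider reports tokens written to the prompt cache:

    ```typescript
    // Semantic convention key for tokens written to the prompt cache
    const LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_WRITE =
      "llm.token_count.prompt_details.cache_write";

    // Hypothetical span attributes map; a real tracer would use span.setAttribute
    const attributes: Record<string, number> = {};
    attributes[LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_WRITE] = 128; // 128 tokens cached

    console.log(attributes);
    ```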