OpenInference JS
    LLM_COST_PROMPT_DETAILS_CACHE_READ: "llm.cost.prompt_details.cache_read"

    Cost of prompt tokens read from cache, in USD.
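A minimal sketch of how this attribute key might be applied; the attribute-map setup below is illustrative (the constant's value is copied from the definition above, and the cost figure is a made-up example), not a prescribed usage from the library:

```typescript
// Attribute key as documented above (OpenInference semantic conventions).
const LLM_COST_PROMPT_DETAILS_CACHE_READ = "llm.cost.prompt_details.cache_read";

// Illustrative only: collect the cached-prompt-token cost (in USD) into a
// plain attribute map, e.g. before attaching it to a tracing span.
const attributes: Record<string, number> = {
  [LLM_COST_PROMPT_DETAILS_CACHE_READ]: 0.0012, // hypothetical cost in USD
};

console.log(attributes[LLM_COST_PROMPT_DETAILS_CACHE_READ]);
```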