OpenInference JS
    LLM_COST_PROMPT_DETAILS_CACHE_WRITE: "llm.cost.prompt_details.cache_write" = ...

    Cost of prompt tokens written to cache in USD
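    A minimal sketch of recording this attribute on an OpenTelemetry span. The package import path and the plain numeric cost value are assumptions for illustration; check the constant's export location in your installed version of the OpenInference JS semantic conventions package.

    ```ts
    import { trace } from "@opentelemetry/api";
    // Assumed import path; verify against the package you have installed.
    import { LLM_COST_PROMPT_DETAILS_CACHE_WRITE } from "@arizeai/openinference-semantic-conventions";

    const tracer = trace.getTracer("llm-example");

    // Hypothetical value: the USD cost of prompt tokens written to the
    // provider's prompt cache for this call.
    const cacheWriteCostUsd = 0.0042;

    const span = tracer.startSpan("llm-call");
    span.setAttribute(LLM_COST_PROMPT_DETAILS_CACHE_WRITE, cacheWriteCostUsd);
    span.end();
    ```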