OpenInference JS
@arizeai/openinference-semantic-conventions
trace/SemanticConventions
Variable LLM_COST_PROMPT_DETAILS_CACHE_WRITE

Const LLM_COST_PROMPT_DETAILS_CACHE_WRITE: "llm.cost.prompt_details.cache_write" = ...

Cost of prompt tokens written to cache in USD