OpenInference JS
@arizeai/openinference-semantic-conventions › trace/SemanticConventions
Variable LLM_COST_PROMPT_DETAILS_CACHE_INPUT

Const LLM_COST_PROMPT_DETAILS_CACHE_INPUT: "llm.cost.prompt_details.cache_input" = ...

The cost, in USD, of the input tokens in the prompt that were cached.
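For illustration, a minimal sketch of recording this attribute on an OpenTelemetry span. It assumes the constant is exported directly from @arizeai/openinference-semantic-conventions; the tracer name, span name, and cost value are hypothetical:

```ts
import { trace } from "@opentelemetry/api";
import { LLM_COST_PROMPT_DETAILS_CACHE_INPUT } from "@arizeai/openinference-semantic-conventions";

// Hypothetical instrumentation scope name.
const tracer = trace.getTracer("example-llm-instrumentation");

tracer.startActiveSpan("llm-call", (span) => {
  // Record the USD cost of cached prompt input tokens under the
  // "llm.cost.prompt_details.cache_input" attribute key.
  span.setAttribute(LLM_COST_PROMPT_DETAILS_CACHE_INPUT, 0.0003); // assumed cost value
  span.end();
});
```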