OpenInference JS
@arizeai/openinference-semantic-conventions
trace/SemanticConventions
Variable LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT
Const LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT: "llm.token_count.prompt_details.cache_input"
Token count for the input tokens in the prompt that were cached (in tokens)
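The constant is a span attribute key, so it is typically used when annotating an LLM span with token-usage details. The sketch below is a minimal example, assuming the constant is importable from the package root and that the span is created with @opentelemetry/api; the tracer name and the `cachedPromptTokens` value are hypothetical.

```ts
import { trace } from "@opentelemetry/api";
import { LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT } from "@arizeai/openinference-semantic-conventions";

const tracer = trace.getTracer("example-llm-app");

// Hypothetical value taken from an LLM provider's usage response:
// the number of prompt tokens that were served from the prompt cache.
const cachedPromptTokens = 128;

tracer.startActiveSpan("llm.completion", (span) => {
  // Record the cached prompt token count under the semantic-convention key
  // "llm.token_count.prompt_details.cache_input".
  span.setAttribute(LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_INPUT, cachedPromptTokens);
  span.end();
});
```

Using the exported constant rather than the raw string keeps the attribute name consistent with the OpenInference semantic conventions that downstream tooling expects.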