@arizeai/openinference-semantic-conventions
trace/SemanticConventions

Variable LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ

Const LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ: "llm.token_count.prompt_details.cache_read"

The number of prompt tokens that were retrieved from cache.
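As a usage illustration, this constant is an attribute key meant to be set on an LLM span. A minimal sketch in TypeScript, assuming an OpenTelemetry tracer is already configured and that the constant is importable from the package root; the tracer name, span name, and token count below are illustrative values, not part of the library:

    import { trace } from "@opentelemetry/api";
    import { LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ } from "@arizeai/openinference-semantic-conventions";

    // Hypothetical tracer and span names for illustration only.
    const tracer = trace.getTracer("example-tracer");

    tracer.startActiveSpan("llm-call", (span) => {
      // The key resolves to "llm.token_count.prompt_details.cache_read".
      // 128 is an assumed count of prompt tokens read from cache.
      span.setAttribute(LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ, 128);
      span.end();
    });

Using the exported constant rather than the raw string keeps instrumentation code in sync with the convention if the key name ever changes.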