server: add usage.prompt_tokens_details.cached_tokens to json response #849

Merged
angeloskath merged 1 commit into ml-explore:main from percontation:cached_tokens on Feb 16, 2026

Conversation

@percontation (Contributor)

Adds an OpenAI-compatible info field indicating how many prompt tokens from a request were found in a cache.

https://platform.openai.com/docs/api-reference/responses/object#responses-object-usage-input_tokens_details-cached_tokens
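For reference, a minimal sketch of what the `usage` block looks like with this field populated, following the OpenAI spec linked above; the token counts are illustrative, not taken from the PR.

```python
# Illustrative usage block per the linked OpenAI spec; the numbers are
# made up for the example. "cached_tokens" counts prompt tokens that were
# served from the server's prompt cache instead of being recomputed.
usage = {
    "prompt_tokens": 1024,
    "completion_tokens": 128,
    "total_tokens": 1152,
    "prompt_tokens_details": {
        "cached_tokens": 896,
    },
}
```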

@angeloskath (Member) left a comment


Looks great!

One nitpick: how about prompt_cache_length everywhere, instead of prompt_ncache and prompt_cache_count?

Adds an OpenAI chat-compatible info field indicating how many prompt tokens from a request were found in a cache.
@percontation (Contributor, Author)

👍. I'll standardize on "prompt_cache_count", since it sits alongside the existing "prompt_token_count" and "completion_token_count" variables.
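A hedged sketch of how those counters could be assembled into the response body; only the variable names prompt_token_count, completion_token_count, and prompt_cache_count come from this discussion, and the helper itself is hypothetical rather than the actual server code.

```python
# Hypothetical helper, not the actual mlx-lm server code: shows how the
# counters named in this thread could map onto the OpenAI-style usage
# object added by this PR.
def build_usage(prompt_token_count: int,
                completion_token_count: int,
                prompt_cache_count: int) -> dict:
    return {
        "prompt_tokens": prompt_token_count,
        "completion_tokens": completion_token_count,
        "total_tokens": prompt_token_count + completion_token_count,
        "prompt_tokens_details": {"cached_tokens": prompt_cache_count},
    }
```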

@angeloskath (Member) left a comment


Looks great

@angeloskath merged commit 572ada2 into ml-explore:main on Feb 16, 2026
2 checks passed
@percontation percontation deleted the cached_tokens branch February 17, 2026 00:49