Enable optimizing LLM prefix caching #34

Open · ianmacartney opened this issue May 7, 2025 · 0 comments

Model providers can cache a long prefix of your message history, e.g.:

```
request 1: [systemMessage, message1]
request 2: [systemMessage, message1, /* cached until here */ message2]
```

However, if you only ever pass the N most recent messages, you could end up with:

```
request 1: [systemMessage, message1, ... messageN]
request 2: [systemMessage, /* cached until here */ message2, ... messageN, messageN+1]
```

This is because the prefix cache hasn't seen the prefix [systemMessage, message2, ...] before.

If you're willing to let the amount of message history vary, you could instead have:

```
request 1: [systemMessage, message1, ... messageN]
request 2: [systemMessage, message1, ... messageN, /* cached until here */ messageN+1]
```

And only after the history grows past an extra buffer of M messages do you truncate and start the cache over:

```
request 1: [systemMessage, message1, ... messageN+M]
request 2: [systemMessage, /* cached until here */ messageM, ... messageN+M, messageN+M+1]
```

This could look like an extra parameter:

```ts
const myAgent = new Agent(components.agent, {
  contextOptions: { recentMessages: N, recentMessageCacheBuffer: M },
});
```
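As a rough illustration of the intended behavior (not a proposed implementation; the `ContextWindow` class and its names are made up for this sketch):

```ts
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical sketch of the N/M windowing described above.
class ContextWindow {
  private start = 0; // index of the oldest message currently kept in context

  constructor(
    private readonly recentMessages: number, // N
    private readonly cacheBuffer: number, // M
  ) {}

  build(systemMessage: Message, history: Message[]): Message[] {
    // Let the context grow past N, so each request's messages are a
    // strict extension of the previous request's (cache-friendly)...
    if (history.length - this.start > this.recentMessages + this.cacheBuffer) {
      // ...and only once the window exceeds N + M, truncate back to the
      // most recent N messages and let the prefix cache rebuild from there.
      this.start = history.length - this.recentMessages;
    }
    return [systemMessage, ...history.slice(this.start)];
  }
}
```

Between truncations, every request reuses the previous request's full prefix; the trade-off is that the context length oscillates between N and N + M messages rather than staying fixed at N.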

NOTE: This all gets thrown out when you use search, since the search context gets injected after the system message and before the message history, invalidating the cached prefix on every request. This makes a case for including the search context as a system message at the end instead.
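For that case, a hedged sketch of the alternative ordering (reusing the `Message` type from the sketch above; the function is illustrative, not part of the component):

```ts
// Illustrative only: append the per-request search context as a trailing
// system message, so the cacheable prefix [systemMessage, ...history]
// stays identical across requests.
function withSearchContext(
  systemMessage: Message,
  history: Message[],
  searchContext: string,
): Message[] {
  return [
    systemMessage,
    ...history,
    { role: "system", content: `Relevant context:\n${searchContext}` },
  ];
}
```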
