Description
The official caching documentation (https://ai-sdk.dev/docs/advanced/caching) uses wrapLanguageModel to intercept the doGenerate call. However, doGenerate does not validate the generated output against the JSON schema. If doGenerate returns an AI response containing invalid JSON, that response is still cached, and every subsequent generateObject call served from the cache fails consistently. The middleware from the docs:
import { Redis } from '@upstash/redis';
// Type import location may differ between AI SDK versions.
import type {
  LanguageModelV2,
  LanguageModelV2Middleware,
} from '@ai-sdk/provider';

// Reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN from the environment.
const redis = Redis.fromEnv();

export const cacheMiddleware: LanguageModelV2Middleware = {
  wrapGenerate: async ({ doGenerate, params }) => {
    const cacheKey = JSON.stringify(params);

    // Return the cached generation if one exists, reviving the timestamp.
    const cached = (await redis.get(cacheKey)) as Awaited<
      ReturnType<LanguageModelV2['doGenerate']>
    > | null;

    if (cached !== null) {
      return {
        ...cached,
        response: {
          ...cached.response,
          timestamp: cached?.response?.timestamp
            ? new Date(cached?.response?.timestamp)
            : undefined,
        },
      };
    }

    // Otherwise generate, cache the raw result unconditionally, and return it.
    const result = await doGenerate();
    redis.set(cacheKey, result);
    return result;
  },
};
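For reference, a minimal sketch of a workaround (not part of the SDK, and not a fix for the underlying issue): skip the cache write when the call expects JSON output but the generated text does not even parse as JSON. The `responseFormat` check and the shape of the `content` text parts below are assumptions about the LanguageModelV2 call options and result types and may need adjusting per SDK version; `Redis.fromEnv()` assumes Upstash environment variables.

import { Redis } from '@upstash/redis';
import type {
  LanguageModelV2,
  LanguageModelV2Middleware,
} from '@ai-sdk/provider';

const redis = Redis.fromEnv();

export const validatingCacheMiddleware: LanguageModelV2Middleware = {
  wrapGenerate: async ({ doGenerate, params }) => {
    const cacheKey = JSON.stringify(params);

    // Serve from cache if available, reviving the timestamp.
    const cached = (await redis.get(cacheKey)) as Awaited<
      ReturnType<LanguageModelV2['doGenerate']>
    > | null;

    if (cached !== null) {
      return {
        ...cached,
        response: {
          ...cached.response,
          timestamp: cached?.response?.timestamp
            ? new Date(cached?.response?.timestamp)
            : undefined,
        },
      };
    }

    const result = await doGenerate();

    // Assumption: the caller requested JSON output via params.responseFormat,
    // and text output arrives as 'text' parts in result.content.
    const expectsJson = params.responseFormat?.type === 'json';
    const text = result.content
      .flatMap(part => (part.type === 'text' ? [part.text] : []))
      .join('');

    let cacheable = true;
    if (expectsJson) {
      try {
        JSON.parse(text);
      } catch {
        cacheable = false; // do not persist malformed JSON generations
      }
    }

    if (cacheable) {
      await redis.set(cacheKey, result);
    }

    return result;
  },
};

Even with this guard, a response that is valid JSON but does not match the generateObject schema would still be cached, so the cache would keep serving a result that fails validation; a more robust approach would validate against the schema before writing to the cache, or only cache after generateObject succeeds.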
AI SDK Version
This happens on all versions of the AI SDK.
Code of Conduct
- I agree to follow this project's Code of Conduct