fix: kg answer streaming #4877
Conversation
PR Summary
Streamlined the answer streaming implementation by standardizing the mechanism across LLM providers such as Claude and OpenAI through the shared `stream_llm_answer` utility.
- Removed custom token streaming and manual stream writes from `backend/onyx/agents/agent_search/kb_search/nodes/d1_generate_answer.py`
- Eliminated unused reference-results streaming code and unnecessary tokenizer logic
- Removed obsolete stream-handling functions from `backend/onyx/agents/agent_search/kb_search/graph_utils.py` in favor of the shared utility
- Fixed visual inconsistencies in streaming output, particularly noticeable with Claude's responses (bullet-point formatting remains a Claude-specific issue)
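The standardization described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual `stream_llm_answer` implementation; the signature, parameter names, and usage here are assumptions for demonstration only:

```python
from typing import Callable, Iterator


def stream_llm_answer(
    llm_stream: Iterator[str],
    write: Callable[[str], None],
) -> str:
    """Hypothetical sketch: forward tokens from any provider's stream
    through a single write callback, so every provider (Claude, OpenAI,
    etc.) shares one streaming path instead of custom per-node writes."""
    answer_parts: list[str] = []
    for token in llm_stream:
        answer_parts.append(token)  # accumulate the full answer
        write(token)                # emit each token as it arrives
    return "".join(answer_parts)


# Usage: both providers' token iterators go through the same helper.
tokens = iter(["Hello", ", ", "world"])
collected: list[str] = []
answer = stream_llm_answer(tokens, collected.append)
```

Routing every provider through one helper like this removes the duplicated manual stream writes that this PR deletes from the `kb_search` nodes.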
2 files reviewed, 1 comment
Looks good!
- fix: answer streaming
- fix: env vars
- fix: remove duplicate
Description
Fixed answer streaming, particularly for models like Claude, and standardized it with the other streaming implementations.
Before

After

(The incorrect bullet-point rendering is a Claude issue.)
How Has This Been Tested?
Tested locally with OpenAI and Claude.
Backporting (check the box to trigger backport action)
Note: verify that the backport action passes; otherwise, resolve the conflicts manually and tag the patches.