feat: gpt5 support #5168
Conversation
Greptile Summary
This PR adds support for GPT-5 models to the Onyx platform through changes in three areas. First, it expands OpenAI model support by adding three GPT-5 variants (`gpt-5`, `gpt-5-mini`, `gpt-5-nano`) along with other next-generation models (`o3`, `o3-mini`, `o4-mini`) to the supported model list in `llm_provider_options.py`. The `gpt-5` model is also made visible by default in the UI, appearing first in the visible models list, suggesting it is intended as a prominent option.
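The exact structure of `llm_provider_options.py` is not shown in this diff; the following is a minimal sketch of what such an addition could look like, assuming the module keeps plain lists of model names (the constant names below are illustrative assumptions, not the PR's actual code):

```python
# llm_provider_options.py (illustrative sketch; actual names and structure may differ)

# Hypothetical list of all supported OpenAI model names, extended with the new entries.
OPEN_AI_MODEL_NAMES = [
    "gpt-5",
    "gpt-5-mini",
    "gpt-5-nano",
    "o3",
    "o3-mini",
    "o4-mini",
    # ... existing models ...
]

# Hypothetical list of models shown by default in the UI,
# with gpt-5 placed first so it appears as the prominent option.
OPEN_AI_VISIBLE_MODEL_NAMES = [
    "gpt-5",
    # ... existing visible models ...
]
```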
Second, the PR implements model-specific temperature handling in `chat_llm.py` by hardcoding the temperature parameter to 1 for all GPT-5 model variants. This conditional logic overrides the user-configured temperature setting specifically for GPT-5 models while preserving existing behavior for all other models.
Third, the PR introduces privacy-focused logging improvements in the Slack listener by implementing a `sanitize_slack_payload` function that removes sensitive `text` and `blocks` fields from Slack payloads before logging. This prevents user message content from appearing in debug logs while preserving other debugging information needed for operational purposes.
These changes integrate with the existing model provider architecture, following established patterns for adding new models while introducing specialized handling for the GPT-5 family. The temperature override suggests GPT-5 models may have specific API requirements or optimal performance characteristics that differ from other OpenAI models.
PR Description Notes:
- The PR description is empty and uses only template placeholders, providing no information about the actual changes, testing approach, or reasoning behind the modifications
Confidence score: 2/5
- This PR has significant risks due to adding unreleased models and incomplete PR documentation
- Score reflects concerns about runtime failures when users select non-existent models and missing test coverage validation
- Pay close attention to `llm_provider_options.py` and the temporary debugging code in `listener.py`
Context used:
Rule - Remove temporary debugging code before merging to production, especially tenant-specific debugging logs. (link)
3 files reviewed, 1 comment
```python
temperature=(
    1
    if self.config.model_name in ["gpt-5", "gpt-5-mini", "gpt-5-nano"]
    else self._temperature
),
```
style: Consider extracting the GPT-5 model list into a constant to avoid duplication and improve maintainability. This list appears in multiple files.
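A sketch of the suggested refactor, assuming a shared constants module can hold the list (the module path and constant name below are assumptions, not part of the PR):

```python
# onyx/llm/constants.py (hypothetical shared location)
GPT_5_MODEL_NAMES = frozenset({"gpt-5", "gpt-5-mini", "gpt-5-nano"})

# chat_llm.py — the inline list in the temperature expression would then become:
temperature=(
    1
    if self.config.model_name in GPT_5_MODEL_NAMES
    else self._temperature
),
```

The same constant could then back the model list in `llm_provider_options.py`, so adding a future GPT-5 variant only requires touching one place.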
cubic analysis
1 issue found across 3 files • Review in cubic
React with 👍 or 👎 to teach cubic. You can also tag @cubic-dev-ai to give feedback, ask questions, or re-run the review.
```python
def sanitize_slack_payload(payload: dict) -> dict:
    """Remove message content from Slack payload for logging"""
    sanitized = {k: v for k, v in payload.items() if k not in {"text", "blocks"}}
```
sanitize_slack_payload fails to strip message text from nested structures, so sensitive content may still be logged (Based on your team’s feedback about avoiding personal data in logs).
Prompt for AI agents
Address the following comment on backend/onyx/onyxbot/slack/listener.py at line 575:
<comment>sanitize_slack_payload fails to strip message text from nested structures, so sensitive content may still be logged (Based on your team’s feedback about avoiding personal data in logs).</comment>
<file context>
```diff
@@ -570,6 +570,16 @@ def shutdown(self, signum: int | None, frame: FrameType | None) -> None:
     sys.exit(0)

+def sanitize_slack_payload(payload: dict) -> dict:
+    """Remove message content from Slack payload for logging"""
+    sanitized = {k: v for k, v in payload.items() if k not in {"text", "blocks"}}
+    if "event" in sanitized and isinstance(sanitized["event"], dict):
+        sanitized["event"] = {
```
</file context>
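One way to address the nested-content concern, sketched below as an assumption rather than the PR's actual fix, is to strip `text` and `blocks` recursively so nested structures such as `event` or `message` are also covered:

```python
from typing import Any

# Fields that may carry user message content and should never be logged.
SENSITIVE_KEYS = {"text", "blocks"}


def sanitize_slack_payload(payload: dict) -> dict:
    """Recursively remove message content from a Slack payload before logging."""

    def _sanitize(value: Any) -> Any:
        if isinstance(value, dict):
            # Drop sensitive keys at this level and recurse into the rest.
            return {
                k: _sanitize(v) for k, v in value.items() if k not in SENSITIVE_KEYS
            }
        if isinstance(value, list):
            return [_sanitize(item) for item in value]
        return value

    return _sanitize(payload)
```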
Description
[Provide a brief description of the changes in this PR]
How Has This Been Tested?
[Describe the tests you ran to verify your changes]
Backporting (check the box to trigger backport action)
Note: You have to check that the backport action passes; otherwise, resolve the conflicts manually and tag the patches.
Summary by cubic
Added support for GPT-5 models and improved Slack logging by removing message content from logs.
New Features
Bug Fixes