Conversation

@wenxi-onyx wenxi-onyx (Member) commented Aug 7, 2025

Description

[Provide a brief description of the changes in this PR]

How Has This Been Tested?

[Describe the tests you ran to verify your changes]

Backporting (check the box to trigger backport action)

Note: Check that the backport action passes; otherwise, resolve the conflicts manually and tag the patches.

  • This PR should be backported (make sure to check that the backport attempt succeeds)
  • [Optional] Override Linear Check

Summary by cubic

Added support for GPT-5 models and improved Slack logging by removing message content from logs.

  • New Features

    • Enabled selection of "gpt-5", "gpt-5-mini", and "gpt-5-nano" models.
    • Set temperature to 1 for GPT-5 models.
  • Bug Fixes

    • Sanitized Slack payloads to prevent logging message content.

@wenxi-onyx wenxi-onyx requested a review from a team as a code owner August 7, 2025 19:12

vercel bot commented Aug 7, 2025

The latest updates on your projects:

Name: internal-search · Status: 🔄 Building · Updated (UTC): Aug 7, 2025 7:12pm

@wenxi-onyx wenxi-onyx closed this Aug 7, 2025
@greptile-apps greptile-apps bot (Contributor) left a comment

Greptile Summary

This PR adds support for GPT-5 models to the Onyx platform by implementing changes across three key areas. First, it expands the OpenAI model support by adding three GPT-5 variants (gpt-5, gpt-5-mini, gpt-5-nano) along with other next-generation models (o3, o3-mini, o4-mini) to the supported model list in llm_provider_options.py. The gpt-5 model is also made visible by default in the UI, appearing first in the visible models list, suggesting it's intended as a prominent option.

Second, the PR implements model-specific temperature handling in chat_llm.py by hardcoding the temperature parameter to 1 for all GPT-5 model variants. This conditional logic overrides the user-configured temperature setting specifically for GPT-5 models while preserving existing behavior for all other models.

Third, the PR introduces privacy-focused logging improvements in the Slack listener by implementing a sanitize_slack_payload function that removes sensitive text and blocks fields from Slack payloads before logging. This prevents user message content from appearing in debug logs while preserving other debugging information needed for operational purposes.

These changes integrate with the existing model provider architecture, following established patterns for adding new models while introducing specialized handling for the GPT-5 family. The temperature override suggests GPT-5 models may have specific API requirements or optimal performance characteristics that differ from other OpenAI models.

PR Description Notes:

  • The PR description is empty and uses only template placeholders, providing no information about the actual changes, testing approach, or reasoning behind the modifications

Confidence score: 2/5

  • This PR has significant risks due to adding unreleased models and incomplete PR documentation
  • Score reflects concerns about runtime failures when users select non-existent models and missing test coverage validation
  • Pay close attention to llm_provider_options.py and the temporary debugging code in listener.py

Context used:

Rule - Remove temporary debugging code before merging to production, especially tenant-specific debugging logs. (link)

3 files reviewed, 1 comment


Comment on lines +400 to +404
temperature=(
    1
    if self.config.model_name in ["gpt-5", "gpt-5-mini", "gpt-5-nano"]
    else self._temperature
),
style: Consider extracting the GPT-5 model list into a constant to avoid duplication and improve maintainability. This list appears in multiple files.
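One way to apply this suggestion is a small shared module holding the model list and the temperature rule. This is a sketch only; the `GPT5_MODELS` name, the `resolve_temperature` helper, and the module placement are assumptions, not part of the PR:

```python
# Hypothetical shared constant (e.g. in a common llm module), so that
# llm_provider_options.py and chat_llm.py reference one list instead of
# duplicating the literal ["gpt-5", "gpt-5-mini", "gpt-5-nano"].
GPT5_MODELS = frozenset({"gpt-5", "gpt-5-mini", "gpt-5-nano"})


def resolve_temperature(model_name: str, configured_temperature: float) -> float:
    """GPT-5 models get a fixed temperature of 1; all others keep the
    user-configured value, matching the conditional in the diff above."""
    return 1.0 if model_name in GPT5_MODELS else configured_temperature
```

The call site in `chat_llm.py` would then shrink to `temperature=resolve_temperature(self.config.model_name, self._temperature)`.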

@cubic-dev-ai cubic-dev-ai bot (Contributor) left a comment

cubic analysis

1 issue found across 3 files

def sanitize_slack_payload(payload: dict) -> dict:
    """Remove message content from Slack payload for logging"""
    sanitized = {k: v for k, v in payload.items() if k not in {"text", "blocks"}}
sanitize_slack_payload fails to strip message text from nested structures, so sensitive content may still be logged (Based on your team’s feedback about avoiding personal data in logs).

Prompt for AI agents
Address the following comment on backend/onyx/onyxbot/slack/listener.py at line 575:

<comment>sanitize_slack_payload fails to strip message text from nested structures, so sensitive content may still be logged (Based on your team’s feedback about avoiding personal data in logs).</comment>

<file context>
@@ -570,6 +570,16 @@ def shutdown(self, signum: int | None, frame: FrameType | None) -> None:
         sys.exit(0)


+def sanitize_slack_payload(payload: dict) -> dict:
+    """Remove message content from Slack payload for logging"""
+    sanitized = {k: v for k, v in payload.items() if k not in {"text", "blocks"}}
+    if "event" in sanitized and isinstance(sanitized["event"], dict):
+        sanitized["event"] = {
</file context>
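A recursive sanitizer would address the nested-structure concern raised above, since Slack events can wrap messages inside the event body (for example, `message_changed` events nest the edited message under `event.message`). This is a sketch, not the PR's code, and the exact set of stripped keys is an assumption:

```python
SENSITIVE_KEYS = {"text", "blocks"}  # assumed content-bearing fields


def sanitize_slack_payload(payload: dict) -> dict:
    """Recursively drop content-bearing fields at every nesting level,
    so wrapped messages (e.g. under event.message) are sanitized too."""

    def _scrub(value):
        if isinstance(value, dict):
            return {k: _scrub(v) for k, v in value.items() if k not in SENSITIVE_KEYS}
        if isinstance(value, list):
            return [_scrub(v) for v in value]
        return value

    return _scrub(payload)
```

Unlike the top-level dict comprehension in the diff, this walks dicts and lists at any depth while leaving non-sensitive keys (timestamps, channel IDs, event types) intact for debugging.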
