
Conversation

Subash-Mohan (Contributor)

This pull request introduces support for the gpt-image-1 model in the image generation tool and updates related configurations, implementations, and tests.

Configuration and Dependency Updates:

  • Updated the default output format for image generation from url to b64_json in tool_configs.py (a rough sketch follows this list).
  • Updated the litellm library to version 1.72.2 in backend/requirements/default.txt.
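
As a rough sketch of the b64_json default mentioned above, the setting could be expressed along these lines. The setting name and environment variable used here are assumptions for illustration, not necessarily what tool_configs.py actually defines:

import os

# Hedged sketch only: the setting name and env-var below are hypothetical,
# not the actual Onyx configuration keys.
# gpt-image-1 returns base64-encoded images rather than hosted URLs, so
# b64_json is the safer default output format.
IMAGE_GENERATION_OUTPUT_FORMAT = os.environ.get(
    "IMAGE_GENERATION_OUTPUT_FORMAT", "b64_json"
)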


How Has This Been Tested?

Tested by generating an image from the UI.


Backporting (check the box to trigger backport action)

Note: verify that the backport action passes; otherwise, resolve the conflicts manually and tag the patches.

  • This PR should be backported (make sure to check that the backport attempt succeeds)
  • [Optional] Override Linear Check

Subash-Mohan requested a review from a team as a code owner on June 10, 2025 at 03:44.
vercel bot commented Jun 10, 2025

internal-search: ✅ Ready (preview updated Jun 10, 2025, 3:45am UTC)

greptile-apps bot left a comment

PR Summary

This PR integrates OpenAI's gpt-image-1 model support into the image generation tool, changing default behaviors and ensuring compatibility across the system.

  • Changed default model from 'dall-e-3' to 'gpt-image-1' and enforced base64 output format in backend/onyx/tools/tool_constructor.py and configs/tool_configs.py
  • Updated litellm to v1.72.2 with improved rate limiting and 2x higher RPS using aiohttp transport
  • Added validation in backend/onyx/tools/tool_implementations/images/image_generation_tool.py to reject the url output format when gpt-image-1 is used (a rough sketch follows this list)
  • Introduced comprehensive test coverage in backend/tests/integration/tests/tools/test_image_generation_tool.py
  • Flags missing documentation updates for the new model capabilities and configuration options
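
A rough sketch of the validation described in the third bullet above; the class shape, argument names, and error message are assumptions, not the exact Onyx implementation:

class ImageGenerationTool:
    # Hedged sketch: constructor shape and field names are assumptions.
    def __init__(self, model: str, output_format: str = "b64_json") -> None:
        # gpt-image-1 only supports base64 responses, so reject URL output early.
        if model == "gpt-image-1" and output_format == "url":
            raise ValueError(
                "gpt-image-1 does not support the url output format; use b64_json."
            )
        self.model = model
        self.output_format = output_format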

6 files reviewed, 2 comments

  return LLMConfig(
      model_provider=llm.config.model_provider,
-     model_name="dall-e-3",
+     model_name="gpt-image-1",

style: Consider adding a constant for 'gpt-image-1' to avoid magic strings, similar to how AZURE_DALLE_DEPLOYMENT_NAME is used
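
For instance, the suggested refactor might look roughly like this; the constant name GPT_IMAGE_1_MODEL_NAME is hypothetical:

# Hypothetical module-level constant, mirroring how AZURE_DALLE_DEPLOYMENT_NAME
# is used elsewhere; the name is an assumption for illustration.
GPT_IMAGE_1_MODEL_NAME = "gpt-image-1"

# The diff above would then read:
#   model_name=GPT_IMAGE_1_MODEL_NAME,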

Comment on lines +75 to +77
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    pytest.skip("OPENAI_API_KEY environment variable not set")

style: test_dalle3_with_base64_format doesn't use the fixture pattern established by other tests, leading to code duplication
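
A sketch of the fixture pattern the comment refers to; the fixture name openai_api_key is an assumption, not necessarily what the test module defines:

import os

import pytest


@pytest.fixture
def openai_api_key() -> str:
    # Skip rather than fail when the key is not configured locally.
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        pytest.skip("OPENAI_API_KEY environment variable not set")
    return api_key


def test_dalle3_with_base64_format(openai_api_key: str) -> None:
    # The test body would consume the fixture instead of re-reading the env var.
    ...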

Weves merged commit 70baecb into main on Jun 10, 2025 (10 of 11 checks passed).
Weves deleted the enhancement/gpt4o-image-gen-support branch on June 10, 2025 at 15:52.
Weves pushed a commit that referenced this pull request Jun 19, 2025
* initial model switching changes

* Update image generation output format and revise prompt handling

* Add validation for output format in ImageGenerationTool and implement tests

---------

Co-authored-by: Subash <subash@onyx.app>
AnkitTukatek pushed a commit to TukaTek/onyx that referenced this pull request Sep 23, 2025