feat: add comprehensive GPT-5 series model support #247
Conversation
- Add GPT-5, GPT-5-mini, and GPT-5-nano models to unified configuration
- Implement proper thinking mode support via dynamic capability checking
- Add OpenAI provider model enumeration methods for registry integration
- Update tests to cover all GPT-5 models and their aliases
- Fix critical bug where thinking mode was hardcoded instead of using model capabilities

Breaking Changes:
- None (backward compatible)

New Models Available:
- gpt-5 (400K context, 128K output, reasoning support)
- gpt-5-mini (400K context, 128K output, efficient variant)
- gpt-5-nano (400K context, fastest/cheapest variant)

Aliases:
- gpt5, gpt5-mini, gpt5mini, gpt5-nano, gpt5nano, nano

All models support:
- Extended thinking mode (reasoning tokens)
- Vision capabilities
- JSON mode
- Function calling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
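The capability list above can be pictured as entries in the unified model configuration. A minimal sketch of a `gpt-5` entry follows; the field names are illustrative assumptions, not the repository's actual schema:

```python
# Hypothetical shape of one entry in the unified model configuration.
# Values mirror the PR description; key names are assumptions.
GPT5_ENTRY = {
    "model_name": "gpt-5",
    "aliases": ["gpt5"],
    "context_window": 400_000,           # 400K-token input context
    "max_output_tokens": 128_000,        # 128K-token output limit
    "supports_extended_thinking": True,  # reasoning tokens
    "supports_vision": True,
    "supports_json_mode": True,
    "supports_function_calling": True,
}
```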
Summary of Changes
Hello @dknedlik, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for the GPT-5 series models (GPT-5, GPT-5-mini, and GPT-5-nano) by updating model configurations and enhancing the OpenAI provider. A critical bug in the supports_thinking_mode() method has been fixed, allowing for dynamic capability checking and ensuring proper extended reasoning features across all supported models.
Highlights
- New GPT-5 Model Integration: Added configurations for GPT-5, GPT-5-mini, and GPT-5-nano, including their aliases, context windows (400K), and output tokens (128K), all supporting extended thinking.
- Dynamic Thinking Mode Support: Refactored supports_thinking_mode() to dynamically check model capabilities, resolving a bug that previously prevented GPT-5-nano from utilizing extended reasoning features.
- Enhanced Model Registry Integration: Implemented get_model_configurations() and get_all_model_aliases() in the OpenAI provider for improved model management and discovery.
- Expanded Test Coverage: Introduced new unit tests for the GPT-5 models and their aliases, along with regression tests to validate the fix for the thinking mode functionality.
Code Review
This pull request introduces comprehensive support for the GPT-5 series of models and resolves a critical bug in the supports_thinking_mode method by switching to a dynamic capability check. The changes are well-structured and include thorough test coverage for the new models and aliases. My review includes suggestions to enhance the model configurations for better flexibility and consistency, and a recommendation to refactor a test for improved maintainability.
Updating per code review comments.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

Adding max tokens for consistency per review comment.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
```python
def get_model_configurations(self) -> dict[str, ModelCapabilities]:
    """Get model configurations supported by this provider.

    Returns:
        Dict mapping model names to their ModelCapabilities
    """
    return self.SUPPORTED_MODELS.copy()

def get_all_model_aliases(self) -> dict[str, list[str]]:
    """Get all model aliases supported by this provider.

    Returns:
        Dict mapping model names to their alias lists
    """
    return {model_name: caps.aliases for model_name, caps in self.SUPPORTED_MODELS.items()}
```
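To illustrate how a registry might consume these enumeration methods, here is a self-contained sketch. The provider class and `ModelCapabilities` shape below are simplified stand-ins for the project's actual types, not the real implementation:

```python
from dataclasses import dataclass, field


@dataclass
class ModelCapabilities:
    # Simplified: the real class carries context window, output limits, etc.
    aliases: list = field(default_factory=list)


class OpenAIProviderSketch:
    # Stand-in for the provider's real SUPPORTED_MODELS table.
    SUPPORTED_MODELS = {
        "gpt-5": ModelCapabilities(aliases=["gpt5"]),
        "gpt-5-nano": ModelCapabilities(aliases=["gpt5-nano", "gpt5nano", "nano"]),
    }

    def get_model_configurations(self):
        # Return a copy so registry callers cannot mutate the provider's table.
        return self.SUPPORTED_MODELS.copy()

    def get_all_model_aliases(self):
        return {name: caps.aliases for name, caps in self.SUPPORTED_MODELS.items()}


# A registry could flatten the alias lists into an alias -> canonical-name map:
provider = OpenAIProviderSketch()
alias_map = {
    alias: name
    for name, aliases in provider.get_all_model_aliases().items()
    for alias in aliases
}
```

Returning a copy from `get_model_configurations()` keeps the provider's table immutable from the registry's point of view, which is likely why the diff copies rather than returning `SUPPORTED_MODELS` directly.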
Why is this needed when it should pick these up from the base class?
All of the model_names are missing.

Thanks for this contribution!
Description
Adds complete support for GPT-5 series models (GPT-5, GPT-5-mini, GPT-5-nano) with proper thinking mode capabilities, fixing a critical bug that prevented extended reasoning features from working.
Changes Made
- Added GPT-5, GPT-5-mini, and GPT-5-nano model configurations (`conf/custom_models.json`)
- Fixed `supports_thinking_mode()` method: changed from hardcoded model checks to dynamic capability checking
- Added `get_model_configurations()` and `get_all_model_aliases()` methods in OpenAI provider for proper registry integration

New Models Available:
- gpt-5 (alias: `gpt5`)
- gpt-5-mini (aliases: `gpt5-mini`, `gpt5mini`, `mini`)
- gpt-5-nano (aliases: `gpt5-nano`, `gpt5nano`, `nano`)

Bug Fix:
The `supports_thinking_mode()` method was hardcoded and missing GPT-5-nano support. Now it dynamically checks model capabilities, enabling proper extended reasoning token support for all models.
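The fix described above can be sketched as a before/after comparison. Names and the capability-flag field are illustrative; the real method lives on the provider class and reads from its configured `ModelCapabilities`:

```python
# Assumed capability table for illustration; the real one comes from the
# unified model configuration.
SUPPORTED_MODELS = {
    "gpt-5": {"supports_extended_thinking": True},
    "gpt-5-mini": {"supports_extended_thinking": True},
    "gpt-5-nano": {"supports_extended_thinking": True},
}


# Before: a hardcoded allow-list that silently omitted gpt-5-nano.
def supports_thinking_mode_hardcoded(model_name: str) -> bool:
    return model_name in ("gpt-5", "gpt-5-mini")


# After: a dynamic lookup that follows whatever the configuration says,
# so newly added models are covered without touching this method.
def supports_thinking_mode(model_name: str) -> bool:
    caps = SUPPORTED_MODELS.get(model_name, {})
    return caps.get("supports_extended_thinking", False)
```

The dynamic version also means the regression can be tested directly: any model whose configuration declares extended thinking must report it.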
Testing
All testing requirements completed:
Run all linting and tests (required):