Add Azure OpenAI Support with Responses API #269
Conversation
Summary of Changes

Hello @sangjin-kim-eyk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the platform's capabilities by introducing full support for Azure OpenAI. The integration uses the Responses API to ensure compatibility with newer models like GPT-5 and GPT-5-Codex, giving users more enterprise-grade AI options. The changes include a new provider, updated configuration, and extensive documentation to make adoption and troubleshooting straightforward.
Code Review

This pull request introduces support for Azure OpenAI using the Responses API, which is a great addition. The implementation is comprehensive, including a new provider, extensive tests, and detailed documentation. I've found a few critical issues in `server.py` related to an import error and incorrect provider-registration logic that will prevent the feature from working. I've also noted some inconsistencies in the documentation regarding the `AZURE_OPENAI_ENDPOINT` format, and some hardcoded paths. In the provider itself, I've suggested improvements to the temperature constraint and to extracting reasoning information from the API response. Overall, this is a solid contribution that just needs a few fixes to be ready.
`server.py` (Outdated)

```python
value = os.getenv(key)
logger.debug(f" {key}: {'[PRESENT]' if value else '[MISSING]'}")
from providers import ModelProviderRegistry
from providers.azure_openai import AzureOpenAIModelProvider
```
There's an import error here. The class is named `AzureOpenAIProvider` in `providers/azure_openai.py`, but it's being imported as `AzureOpenAIModelProvider`. This typo will cause a critical `ImportError` at startup if Azure OpenAI is configured.

```diff
- from providers.azure_openai import AzureOpenAIModelProvider
+ from providers.azure_openai import AzureOpenAIProvider
```
`server.py` (Outdated)

```python
ModelProviderRegistry.register_provider(ProviderType.OPENAI, OpenAIModelProvider)
if azure_key and azure_endpoint and azure_deployment:
    if azure_key != "your_azure_api_key_here":
        ModelProviderRegistry.register_provider(ProviderType.AZURE, AzureOpenAIModelProvider)
```
Similar to the import statement, there's a typo in the class name used for registration. It should be `AzureOpenAIProvider`, not `AzureOpenAIModelProvider`. This will cause a `NameError` when trying to register the provider, preventing it from being used.

```diff
- ModelProviderRegistry.register_provider(ProviderType.AZURE, AzureOpenAIModelProvider)
+ ModelProviderRegistry.register_provider(ProviderType.AZURE, AzureOpenAIProvider)
```
`server.py` (Outdated)

```python
if (azure_key != "your_azure_api_key_here" and
        azure_endpoint != "your_azure_endpoint_here"):
```
The checks for placeholder values for `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` are incorrect and will not work as intended:

- The API key placeholder in `.env.example` is `your_azure_openai_key_here`, not `your_azure_api_key_here`.
- The endpoint placeholder check is for `your_azure_endpoint_here`, which doesn't match the example `https://your-resource.openai.azure.com/`.

This can lead to the provider being registered with placeholder values. The checks should use the exact placeholder strings from `.env.example`.

```diff
- if (azure_key != "your_azure_api_key_here" and
-         azure_endpoint != "your_azure_endpoint_here"):
+ if (azure_key != "your_azure_openai_key_here" and
+         azure_endpoint != "https://your-resource.openai.azure.com/"):
```
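If exact-string matching feels brittle, a small helper could treat any known placeholder as "not configured". A minimal sketch; the `_is_placeholder` helper and its marker list are illustrative, not part of this PR:

```python
import os

# Substrings taken from the .env.example placeholders quoted above.
_PLACEHOLDER_MARKERS = ("your_azure_openai_key_here", "your-resource.openai.azure.com")

def _is_placeholder(value: str | None) -> bool:
    """Return True for unset values or values still matching .env.example."""
    return not value or any(marker in value for marker in _PLACEHOLDER_MARKERS)

azure_key = os.getenv("AZURE_OPENAI_API_KEY")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
# Only register the Azure provider when both values look like real credentials.
azure_configured = not (_is_placeholder(azure_key) or _is_placeholder(azure_endpoint))
```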
`.env.example`

```bash
#
# All 4 variables below are REQUIRED for Azure OpenAI to work:
AZURE_OPENAI_API_KEY=your_azure_openai_key_here
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
```
The example value for `AZURE_OPENAI_ENDPOINT` includes a trailing slash. This is inconsistent with the advice in `docs/AZURE_OPENAI_TROUBLESHOOTING.md` (which correctly states it should be omitted) and can lead to connection errors, as the `openai` library typically appends paths to the base endpoint. For consistency and to prevent potential issues, the trailing slash should be removed from all examples and documentation:

```bash
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
```
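Alternatively (or additionally), the provider could normalize the value at load time so either form works; a minimal sketch:

```python
import os

# Strip any trailing slash so URL joins in the openai client never
# produce a double slash, regardless of how .env was written.
azure_endpoint = (os.getenv("AZURE_OPENAI_ENDPOINT") or "").rstrip("/")
```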
`docs/AZURE_OPENAI_TROUBLESHOOTING.md` (Outdated)

```markdown
- Main README: `E:\zen-mcp-server\README.md`
- Development Guide: `E:\zen-mcp-server\CLAUDE.md`
- Integration Tests: `E:\zen-mcp-server\tests\test_azure_openai_integration.py`
- Provider Implementation: `E:\zen-mcp-server\providers\azure_openai_provider.py`
```
This section contains hardcoded local Windows file paths (e.g., `E:\zen-mcp-server\README.md`). These should be replaced with relative paths from the project root (e.g., `README.md`, `CLAUDE.md`) to be universally applicable for all users and environments.

```diff
- - Main README: `E:\zen-mcp-server\README.md`
- - Development Guide: `E:\zen-mcp-server\CLAUDE.md`
- - Integration Tests: `E:\zen-mcp-server\tests\test_azure_openai_integration.py`
- - Provider Implementation: `E:\zen-mcp-server\providers\azure_openai_provider.py`
+ - Main README: `README.md`
+ - Development Guide: `CLAUDE.md`
+ - Integration Tests: `tests/test_azure_openai_integration.py`
+ - Provider Implementation: `providers/azure_openai_provider.py`
```
`providers/azure_openai.py` (Outdated)

```python
supports_images=True,  # GPT-5 supports vision
max_image_size_mb=20.0,  # 20MB per OpenAI docs
supports_temperature=True,
temperature_constraint=TemperatureConstraint.create("fixed"),
```
All supported Azure models are configured with a fixed temperature of 1.0. While the documentation mentions this is a requirement for `gpt-5-codex`, it's possible that other models like `gpt-5` or `o3-mini` support variable temperatures. Applying this constraint to all models unnecessarily limits their functionality. Consider applying the fixed temperature constraint only to the models that strictly require it, and using a `RangeTemperatureConstraint` for the others.
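One way to implement this suggestion is a small per-model lookup that only pins the models known to require temperature=1.0. A sketch only: the constraint class names come from this PR's tests, but the module path and constructor signatures are assumptions, not the project's actual API:

```python
from providers.base import (  # assumed location of the constraint classes
    FixedTemperatureConstraint,
    RangeTemperatureConstraint,
    TemperatureConstraint,
)

# Models whose APIs reject any temperature other than the default 1.0.
FIXED_TEMPERATURE_MODELS = {"gpt-5-codex", "o3-mini"}

def temperature_constraint_for(model_name: str) -> TemperatureConstraint:
    """Pick a per-model constraint instead of a blanket fixed temperature."""
    if model_name in FIXED_TEMPERATURE_MODELS:
        return FixedTemperatureConstraint(1.0)
    # GPT-5 family and gpt-4.1 accept the standard OpenAI 0.0-2.0 range.
    return RangeTemperatureConstraint(0.0, 2.0)
```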
```python
if hasattr(item, "summary") and item.summary:
    logger.debug(f"Reasoning summary: {item.summary}")
    # Optionally include reasoning in output
    # text_parts.append(f"[Reasoning: {item.summary}]")
```
The logic to extract and include the `reasoning` summary from the API response is commented out. This information could be valuable for the user, providing insight into the model's thought process. It would be beneficial to uncomment this to include the reasoning summary in the final content.

```diff
- # text_parts.append(f"[Reasoning: {item.summary}]")
+ text_parts.append(f"[Reasoning: {item.summary}]")
```
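For context, here's a sketch of what the full output-item walk could look like with reasoning included. The field names (`output`, `type`, `summary`, `content`, `text`) follow the snippet above and the public Responses API shape, but treat the details as assumptions rather than this PR's exact code:

```python
def _extract_text(response) -> str:
    """Collect message text plus reasoning summaries from a Responses API result."""
    text_parts: list[str] = []
    for item in response.output:
        item_type = getattr(item, "type", None)
        if item_type == "reasoning" and getattr(item, "summary", None):
            # Surface the model's reasoning summary alongside the answer.
            text_parts.append(f"[Reasoning: {item.summary}]")
        elif item_type == "message":
            for part in getattr(item, "content", []) or []:
                text = getattr(part, "text", None)
                if text:
                    text_parts.append(text)
    return "\n".join(text_parts)
```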
Commit: Add Claude Code GitHub Workflow (…1759545395674)
Commit: Fixed 4 critical issues identified in Azure OpenAI integration

1. Import error: Changed AzureOpenAIModelProvider to AzureOpenAIProvider
   - Updated import statement in server.py (line 415)
   - Updated provider registration in server.py (line 514)
2. Placeholder value checks: Aligned with .env.example values
   - Changed azure_key check from "your_azure_api_key_here" to "your_azure_openai_key_here"
   - Changed azure_endpoint check from "your_azure_endpoint_here" to "https://your-resource.openai.azure.com/"
   - Applied fixes in both validation sections (lines 462-463, 513-514)
3. Temperature constraints: Fixed contradictory settings across models
   - gpt-5-codex: Set supports_temperature=False (requires fixed temperature=1.0 per docs)
   - o3-mini: Set supports_temperature=False (reasoning model requires temperature=1.0)
   - gpt-5/gpt-5-mini/gpt-5-nano/gpt-4.1: Changed to range constraint (supports variable temperature)
   - Resolves contradiction where supports_temperature=True but constraint was "fixed"
4. Documentation: Removed hardcoded Windows file paths
   - Changed absolute paths (E:\zen-mcp-server\...) to relative paths
   - Updated provider implementation reference from azure_openai_provider.py to azure_openai.py

All syntax checks passed. Changes ready for testing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Commit: Updated tests to match the corrected temperature constraint implementation

- test_get_capabilities_gpt5: Now expects RangeTemperatureConstraint (0.0-2.0) instead of a fixed temperature. GPT-5 supports variable temperature.
- test_get_capabilities_gpt5_codex: Now expects supports_temperature=False with FixedTemperatureConstraint(1.0). GPT-5-Codex requires temperature=1.0.

These changes align with the Gemini bot's review feedback: "Apply fixed temperature constraint only to models that strictly require it, and use RangeTemperatureConstraint for the others."

All 27 Azure OpenAI provider tests now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Commit: Automatically generated by python-semantic-release
Thanks for working on this! Unfortunately there's too much noise in this implementation, plus it's subclassing the wrong base class. I'll push something soon and would appreciate it if you could give that a shot.
Summary

Adds Azure OpenAI integration using the Responses API, supporting the GPT-5 and GPT-5-Codex models.
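For reviewers unfamiliar with the Responses API, here's a minimal sketch of the call shape against Azure OpenAI using the `openai` Python library. The `api_version` string and the model/deployment name are placeholders, not values taken from this PR:

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"].rstrip("/"),
    api_version="2025-03-01-preview",  # placeholder: use a Responses-capable version
)

# "model" is the Azure *deployment* name, which may differ from the base model id.
response = client.responses.create(
    model="gpt-5-codex",
    input="Say hello from the Responses API.",
)
print(response.output_text)
```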
Features

- O3-Mini, GPT-4.1 (in addition to GPT-5 and GPT-5-Codex)

Testing
Files Changed

- `providers/azure_openai.py` (466 lines)
- `tests/test_azure_openai_provider.py` (836 lines)
- `docs/AZURE_OPENAI_TROUBLESHOOTING.md`
- `.env.example`, `README.md`, `server.py`, `registry.py`
Fixes #265