
Conversation

@DragonFSKY
Contributor

@DragonFSKY DragonFSKY commented Nov 3, 2025

fix(server): wrap ModelContext initialization errors

Summary

Fixes uncaught exceptions during ModelContext initialization that cause 500 Internal Server Errors and session crashes when users request invalid/restricted models.

Problem

The handle_call_tool() function creates ModelContext without exception handling:

model_context = ModelContext(model_name, model_option)
# No try/except - exceptions bubble up as 500 errors
arguments["_model_context"] = model_context

Exception Sources:

  • Model validation errors (restricted models, invalid format)
  • Provider SDK failures (network issues, authentication errors)
  • Capability resolution failures

Impact:

  • Users see raw Python stack traces instead of helpful errors
  • MCP session terminates on model initialization failure
  • No opportunity to suggest alternative models

Root Cause

ModelContext initialization can raise ValueError (validation) or other exceptions (SDK failures), but these are not caught. The errors propagate as 500 responses with raw stack traces, giving users a poor experience.

Solution

Wrap ModelContext initialization in try/except to convert exceptions into structured ToolExecutionError:

try:
    model_context = ModelContext(model_name, model_option)
    arguments["_model_context"] = model_context
    arguments["_resolved_model_name"] = model_name
    _ = model_context.capabilities  # Trigger validation
    logger.debug(f"Model context created for {model_name}...")
except ValueError as exc:
    # Handle validation errors (restrictions, format issues)
    logger.error(f"Model validation failed for '{model_name}': {exc}")
    error_output = ToolOutput(status="error", content=str(exc), ...)
    raise ToolExecutionError(error_output.model_dump_json()) from exc
except Exception as exc:
    # Handle unexpected errors (SDK failures, network issues)
    logger.exception("Model context setup failed for %s", model_name)
    error_output = ToolOutput(
        status="error",
        content=f"Unable to initialize model '{model_name}'. Please choose a different model.",
        ...
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc

Benefits:

  • ✅ Friendly error messages instead of stack traces
  • ✅ Session continues (not crashed)
  • ✅ Actionable feedback to users
  • ✅ Full exception details logged for debugging

Test Plan

  • All linting and tests pass
  • Manual test cases:

Test Case 1: Restricted Model

# Setup: OPENAI_ALLOWED_MODELS=gpt-4
await handle_call_tool(name="chat", arguments={"model": "gpt-5-pro", "prompt": "test"})
# Expected: ToolExecutionError with "Model gpt-5-pro not allowed" message

Test Case 2: Invalid Format

await handle_call_tool(name="chat", arguments={"model": "invalid::format", "prompt": "test"})
# Expected: Structured error with validation message

Test Case 3: Provider Failure

# Mock provider.get_capabilities() raises RuntimeError
await handle_call_tool(name="chat", arguments={"model": "gemini-2.5-pro", "prompt": "test"})
# Expected: Generic error message, full stack trace in logs
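
As a rough sketch, Test Case 1 could also be automated. This is illustrative only: it assumes pytest with pytest-asyncio is configured, that handle_call_tool and ToolExecutionError are importable from server, that OPENAI_ALLOWED_MODELS is read at call time, and that the exception message carries the JSON-serialized ToolOutput.

# Hypothetical test sketch; import paths and env-var handling are assumptions.
import json

import pytest

from server import ToolExecutionError, handle_call_tool


@pytest.mark.asyncio
async def test_restricted_model_returns_structured_error(monkeypatch):
    # Restrict the provider to gpt-4 so a request for gpt-5-pro is rejected.
    monkeypatch.setenv("OPENAI_ALLOWED_MODELS", "gpt-4")

    with pytest.raises(ToolExecutionError) as exc_info:
        await handle_call_tool(name="chat", arguments={"model": "gpt-5-pro", "prompt": "test"})

    # The handler serializes a ToolOutput into the exception message.
    payload = json.loads(str(exc_info.value))
    assert payload["status"] == "error"
    assert "gpt-5-pro" in payload["content"]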

Changes

File Modified: server.py (Lines 844-878)

Before (8 lines):

model_context = ModelContext(model_name, model_option)
arguments["_model_context"] = model_context
arguments["_resolved_model_name"] = model_name
logger.debug(f"Model context created for {model_name}...")

After (35 lines):

try:
    model_context = ModelContext(model_name, model_option)
    arguments["_model_context"] = model_context
    arguments["_resolved_model_name"] = model_name
    _ = model_context.capabilities  # Trigger validation
    logger.debug(f"Model context created for {model_name}...")
except ValueError as exc:
    # Handle validation errors
    logger.error(f"Model validation failed for '{model_name}': {exc}")
    error_output = ToolOutput(...)
    raise ToolExecutionError(...) from exc
except Exception as exc:
    # Handle unexpected errors
    logger.exception("Model context setup failed for %s", model_name)
    error_output = ToolOutput(...)
    raise ToolExecutionError(...) from exc

Key Improvements:

  • Validation trigger: _ = model_context.capabilities
  • ValueError handler for validation errors
  • Generic Exception handler for SDK failures
  • Structured ToolOutput errors
  • Exception chain preservation with from exc
  • Proper logging for debugging
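
For illustration, the structured payload a caller receives inside the ToolExecutionError would look roughly like the dict below. Field names are taken from the handler code quoted in the review comments; the concrete message text is hypothetical.

# Approximate shape of the serialized error payload (illustrative values only).
example_payload = {
    "status": "error",
    "content": "Model gpt-5-pro not allowed",  # actual text depends on the validation error
    "content_type": "text",
    "metadata": {"tool_name": "chat", "requested_model": "gpt-5-pro"},
}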

Related Issues

Fixes unhandled exceptions in model context initialization (main branch).

Severity: High
Priority: P1

Checklist

  • PR title follows conventional commits format
  • Activated venv and ran code quality checks: ./code_quality_checks.sh
  • Self-review completed
  • Tests added for error scenarios (recommended)
  • All unit tests passing
  • No breaking changes (pure error handling improvement)
  • Ready for review

Problem: ModelContext initialization and capability resolution can raise ValueError or SDK errors, which bubble out of handle_call_tool as 500 responses.

Impact: Users receive raw stack traces, and the server skips its fallback model logic when a model is misconfigured or restricted.

Solution: Catch ModelContext exceptions, log them, and re-raise them as ToolExecutionError with structured output so callers see actionable messages.

@gemini-code-assist
Contributor

Summary of Changes

Hello @DragonFSKY, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the stability and user experience of the server by implementing comprehensive error handling for ModelContext initialization. Previously, failures in this critical step would lead to unhandled exceptions, resulting in 500 Internal Server Errors and session disruptions. The changes now gracefully catch these errors, provide structured and actionable feedback to users, and improve server-side debugging capabilities, ensuring a more resilient and user-friendly application.

Highlights

  • Robust Error Handling: Implemented comprehensive try/except blocks around ModelContext initialization to gracefully handle failures.
  • Prevents 500 Internal Server Errors: Catches exceptions that previously led to 500 errors and session crashes, improving application stability.
  • Structured Error Feedback: Converts caught exceptions into structured ToolExecutionError with user-friendly ToolOutput messages, enhancing user experience.
  • Specific Error Categorization: Distinguishes between ValueError (for validation issues like restricted models) and general Exception (for unexpected provider/SDK failures).
  • Enhanced Debugging: Improves server-side logging for errors and preserves the full exception chain using from exc for easier troubleshooting.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist
Contributor

Summary of Changes

Hello @DragonFSKY, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the robustness and user experience of the server by addressing critical error handling deficiencies in the handle_call_tool() function. Previously, failures during ModelContext initialization could lead to unhandled exceptions, resulting in 500 Internal Server Errors and session disruptions. The changes introduce structured exception handling to gracefully manage these errors, providing clear, actionable feedback to users and improving server-side debugging capabilities.

Highlights

  • Robust Error Handling for ModelContext Initialization: Implemented comprehensive try/except blocks around ModelContext initialization within the handle_call_tool() function to prevent unhandled exceptions from causing 500 Internal Server Errors.
  • Improved User Experience and Session Stability: Errors during model context creation now result in structured, user-friendly messages instead of raw Python stack traces, ensuring the MCP session remains stable and does not terminate unexpectedly.
  • Specific Error Categorization: Distinguished between ValueError (for validation issues like restricted models or invalid formats) and general Exception (for unexpected issues like provider SDK failures), providing tailored error messages and logging.
  • Structured Error Output and Debugging: All caught exceptions are converted into ToolExecutionError with a structured ToolOutput format, and the from exc syntax is used to preserve the original exception chain for easier debugging.
  • Eager Capability Validation: Added _ = model_context.capabilities to explicitly trigger model capability validation and caching immediately after context creation, ensuring early detection of configuration issues.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an excellent pull request that addresses a critical gap in error handling. Wrapping the ModelContext initialization in a try...except block to catch both specific and general exceptions significantly improves the server's robustness and the user experience. The use of ToolExecutionError and preserving the exception chain are great practices. I've added a couple of minor suggestions to enhance code consistency and maintainability.

Comment on lines +855 to +878
except ValueError as exc:
    # Handle validation errors (e.g., restricted models, invalid format)
    logger.error(f"Model context validation failed for '{model_name}': {exc}")
    error_output = ToolOutput(
        status="error",
        content=str(exc),
        content_type="text",
        metadata={"tool_name": name, "requested_model": model_name},
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc
except Exception as exc:
    # Handle unexpected errors (e.g., provider failures, SDK errors)
    logger.exception("Model context setup failed for %s", model_name)
    error_message = (
        f"Unable to initialize model context for '{model_name}'. "
        "Please choose a different model or check provider configuration."
    )
    error_output = ToolOutput(
        status="error",
        content=error_message,
        content_type="text",
        metadata={"tool_name": name, "requested_model": model_name},
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc

medium

The logic for creating a ToolOutput and raising a ToolExecutionError is duplicated in both the ValueError and Exception blocks. To improve maintainability and adhere to the Don't Repeat Yourself (DRY) principle, consider refactoring this common pattern into a private helper function. This would centralize the error response creation, making the code cleaner and easier to modify in the future.
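
One possible shape for such a helper, sketched under the assumption that ToolOutput and ToolExecutionError keep the signatures shown in the diff above; the helper name and keyword-only arguments are hypothetical.

# Hypothetical helper; ToolOutput / ToolExecutionError are the project's
# existing types used in server.py (imports omitted here).
from typing import NoReturn


def _raise_model_context_error(*, tool_name: str, model_name: str, message: str, exc: Exception) -> NoReturn:
    """Serialize a structured error and re-raise it as ToolExecutionError."""
    error_output = ToolOutput(
        status="error",
        content=message,
        content_type="text",
        metadata={"tool_name": tool_name, "requested_model": model_name},
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc


# Example use inside the except blocks:
# except ValueError as exc:
#     logger.error(f"Model context validation failed for '{model_name}': {exc}")
#     _raise_model_context_error(tool_name=name, model_name=model_name, message=str(exc), exc=exc)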

    raise ToolExecutionError(error_output.model_dump_json()) from exc
except Exception as exc:
    # Handle unexpected errors (e.g., provider failures, SDK errors)
    logger.exception("Model context setup failed for %s", model_name)

medium

For consistency with the logging style used elsewhere in this file (including the ValueError handler you've added), it's recommended to use an f-string here instead of a C-style format string.

Suggested change
-    logger.exception("Model context setup failed for %s", model_name)
+    logger.exception(f"Model context setup failed for '{model_name}'")


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses the issue of unhandled exceptions during ModelContext initialization. By introducing a try...except block, you've made the server more resilient and user-friendly, preventing crashes and providing structured error messages. The distinction between ValueError and other exceptions is well-handled. I have one suggestion to refactor the exception handling blocks to reduce code duplication and improve maintainability.

Comment on lines +855 to +878
except ValueError as exc:
    # Handle validation errors (e.g., restricted models, invalid format)
    logger.error(f"Model context validation failed for '{model_name}': {exc}")
    error_output = ToolOutput(
        status="error",
        content=str(exc),
        content_type="text",
        metadata={"tool_name": name, "requested_model": model_name},
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc
except Exception as exc:
    # Handle unexpected errors (e.g., provider failures, SDK errors)
    logger.exception("Model context setup failed for %s", model_name)
    error_message = (
        f"Unable to initialize model context for '{model_name}'. "
        "Please choose a different model or check provider configuration."
    )
    error_output = ToolOutput(
        status="error",
        content=error_message,
        content_type="text",
        metadata={"tool_name": name, "requested_model": model_name},
    )
    raise ToolExecutionError(error_output.model_dump_json()) from exc

medium

The except ValueError and except Exception blocks contain duplicated logic for creating the ToolOutput and raising the ToolExecutionError. To make the code more concise and maintainable (following the DRY principle), you could combine these into a single except Exception block and use isinstance(exc, ValueError) to handle the differences in logging and error message content.

This refactoring also makes the logging style consistent by using an f-string for logger.exception, matching the logger.error call.

        except Exception as exc:
            if isinstance(exc, ValueError):
                # Handle validation errors (e.g., restricted models, invalid format)
                logger.error(f"Model context validation failed for '{model_name}': {exc}")
                error_content = str(exc)
            else:
                # Handle unexpected errors (e.g., provider failures, SDK errors)
                logger.exception(f"Model context setup failed for '{model_name}'")
                error_content = (
                    f"Unable to initialize model context for '{model_name}'. "
                    "Please choose a different model or check provider configuration."
                )

            error_output = ToolOutput(
                status="error",
                content=error_content,
                content_type="text",
                metadata={"tool_name": name, "requested_model": model_name},
            )
            raise ToolExecutionError(error_output.model_dump_json()) from exc

DragonFSKY added a commit to DragonFSKY/pal-mcp-server that referenced this pull request Dec 6, 2025
…ons#317)

- Add try/except error handling for ModelContext initialization
- Convert stack traces to friendly structured error messages
- Session no longer crashes when model initialization fails