Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Example Code (Python)
```python
import asyncio
from typing import List, Optional

from pydantic import BaseModel, Field

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from langchain_groq import ChatGroq


class MovieRecommendation(BaseModel):
    title: str = Field(description="The movie title")
    genre: str = Field(description="Primary genre of the movie")
    year: Optional[int] = Field(description="Release year")
    similar_movies: List[str] = Field(description="List of 2-3 similar movies")


async def main() -> None:
    # Each method below fails on its own; run sequentially, the script stops at the first error.

    # Method 1: create_agent with Auto strategy (ProviderStrategy) - FAILS
    agent = create_agent(model="groq:openai/gpt-oss-120b", response_format=MovieRecommendation)
    result = await agent.ainvoke({"messages": [{"role": "user", "content": "Recommend a sci-fi movie"}]})

    # Method 2: create_agent with ToolStrategy - FAILS
    agent = create_agent(model="groq:openai/gpt-oss-120b", response_format=ToolStrategy(MovieRecommendation))
    result = await agent.ainvoke({"messages": [{"role": "user", "content": "Recommend a sci-fi movie"}]})

    # Method 3: ChatGroq.with_structured_output() - FAILS
    llm = ChatGroq(model="openai/gpt-oss-120b")
    structured_llm = llm.with_structured_output(MovieRecommendation)
    result = await structured_llm.ainvoke("Recommend a sci-fi movie")


asyncio.run(main())
```

Error Message and Stack Trace (if applicable)
**Error 1: ProviderStrategy (default) - `strict` parameter not supported**
```
TypeError: AsyncCompletions.create() got an unexpected keyword argument 'strict'
```

Full traceback:

```
File ".../langchain_groq/chat_models.py", line 611, in _agenerate
    response = await self.async_client.create(messages=message_dicts, **params)
TypeError: AsyncCompletions.create() got an unexpected keyword argument 'strict'
```
**Error 2: ToolStrategy - model ignores tool calls**
```
BadRequestError: Error code: 400 - {'error': {'message': 'Tool choice is required, but model did not call a tool', 'type': 'invalid_request_error', 'code': 'tool_use_failed', ...}}
```

Description
The `openai/gpt-oss-120b` model on Groq is incompatible with LangChain's structured output features. Both `create_agent` with `response_format` and `ChatGroq.with_structured_output()` fail, with a different error depending on the strategy used:

- ProviderStrategy (default): LangChain passes `strict=True` to the Groq API, but the Groq SDK (`groq` package) does not accept this parameter for the `openai/gpt-oss-120b` model (a possible workaround to test is sketched after this list).
- ToolStrategy: The model does not follow the tool-calling instructions and generates free-form text instead, causing a `tool_use_failed` error from the Groq API.
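For what it's worth, one user-side workaround that might be worth testing (I have not verified it against `openai/gpt-oss-120b`) is to force `with_structured_output()` onto the non-strict JSON-mode path. This assumes `method="json_mode"` is still accepted in langchain-groq 1.1.0 as it was in earlier releases, and it only helps if the model reliably emits valid JSON when asked:

```python
# Untested workaround sketch: force JSON mode instead of the strict json_schema path.
# Assumes ChatGroq.with_structured_output() still accepts method="json_mode";
# whether openai/gpt-oss-120b actually honors the JSON instruction is untested.
llm = ChatGroq(model="openai/gpt-oss-120b")
structured_llm = llm.with_structured_output(
    MovieRecommendation,   # Pydantic schema from the example above
    method="json_mode",    # avoids sending strict=True / relying on tool calls
)
result = structured_llm.invoke(
    "Recommend a sci-fi movie. Respond only with JSON containing the fields "
    "title, genre, year, and similar_movies."
)
```

This sidesteps both the `strict` kwarg and tool calling, so it does not address the underlying incompatibility, only the symptom.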
**What works:**
- `groq:llama-3.3-70b-versatile` works correctly with both ProviderStrategy (auto) and ToolStrategy (see the snippet below)
- Both structured output methods return the expected Pydantic model
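For reference, this is the working combination, identical to Method 1 in the example above except for the model string (assuming the parsed object is returned under the `structured_response` key, as described in the v1 `create_agent` docs):

```python
# Same as Method 1 above, only the model string differs - this combination works.
agent = create_agent(
    model="groq:llama-3.3-70b-versatile",
    response_format=MovieRecommendation,
)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Recommend a sci-fi movie"}]}
)
print(result["structured_response"])  # MovieRecommendation(title=..., genre=..., ...)
```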
**What fails:**
- `groq:openai/gpt-oss-120b` fails with both strategies (errors shown above)
**Expected behavior:**
Either:
- `langchain-groq` should filter out the `strict` parameter for models that don't support it (a rough sketch of this option follows the list)
- Or provide a clear error message indicating the model doesn't support structured output
- Or document which Groq models are compatible with structured output
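To illustrate the first option only (the names below are hypothetical, not actual langchain-groq internals): the traceback shows the request kwargs are assembled into a `params` dict before `self.async_client.create(messages=message_dicts, **params)` is called, so a guard along these lines before that call is roughly what I mean by "filter out":

```python
# Hypothetical illustration only - not actual langchain-groq code.
# Idea: drop `strict` from the outgoing request for models known not to accept it,
# before the kwargs reach groq's AsyncCompletions.create().

# Hypothetical registry of Groq-hosted models that reject the strict kwarg.
MODELS_WITHOUT_STRICT_SUPPORT = {"openai/gpt-oss-120b"}


def filter_unsupported_params(model: str, params: dict) -> dict:
    """Remove request parameters the target Groq model does not accept."""
    if model in MODELS_WITHOUT_STRICT_SUPPORT:
        params = {k: v for k, v in params.items() if k != "strict"}
    return params


# e.g. in _agenerate, before calling the Groq client (pseudocode):
# params = filter_unsupported_params(self.model_name, params)
# response = await self.async_client.create(messages=message_dicts, **params)
```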
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:29 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6030
Python Version: 3.12.10 (main, Apr 8 2025, 11:35:47) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
langchain_core: 1.1.0
langchain: 1.1.0
langsmith: 0.4.49
langchain_anthropic: 1.2.0
langchain_groq: 1.1.0
langchain_openai: 1.1.0
langgraph_sdk: 0.2.10
Other Dependencies
groq: 0.36.0
openai: 1.109.1
pydantic: 2.12.5