feat(openai): (v1) support pdfs passed via url in standard format #32876
base: wip-v1.0
Conversation
**CodSpeed WallTime Performance Report:** merging #32876 will degrade performance by 12.53%.

| Benchmark | BASE | HEAD | Change |
|---|---|---|---|
| ❌ `test_import_time[BaseChatModel]` | 494.9 ms | 565.8 ms | -12.53% |
| ❌ `test_import_time[ChatPromptTemplate]` | 554.3 ms | 628.8 ms | -11.84% |
| ❌ `test_import_time[PydanticOutputParser]` | 499.9 ms | 561.3 ms | -10.94% |

**CodSpeed Instrumentation Performance Report:** merging #32876 will not alter performance.
Need to add back a more descriptive warning / error if users pass …
This requires a small refactor.

Currently we pass all standard inputs through `convert_to_openai_data_block` in langchain-core, which returns Chat Completions format. `BaseChatOpenAI` then translates to Responses format if needed. PDFs specified via URLs are not supported by Chat Completions, so ideally `convert_to_openai_data_block` continues to raise an informative `ValueError` there.

Simplest solution IMO is to enable `convert_to_openai_data_block` to return either Chat Completions or Responses format, and do the translation there instead of in `BaseChatOpenAI`. `BaseChatOpenAI` would continue to accept Chat Completions format directly on the Responses path.
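The proposed refactor could be sketched roughly as below. This is a hypothetical simplification, not langchain-core's actual `convert_to_openai_data_block` implementation: the `api` parameter and the fallback behavior are assumptions for illustration. It shows the key idea — the converter targets either API shape, and raises the informative `ValueError` itself when the requested shape (Chat Completions) cannot express a PDF URL.

```python
# Hypothetical sketch of the proposed refactor (NOT the real langchain-core
# function): one converter that can emit either Chat Completions or Responses
# message parts, and raises a descriptive error for inputs the target API
# cannot represent (PDF URLs on Chat Completions).


def convert_to_openai_data_block(block: dict, api: str = "chat_completions") -> dict:
    """Translate a standard content block into an OpenAI message part."""
    if block.get("type") == "file" and "url" in block:
        if api == "chat_completions":
            # Chat Completions has no way to reference a PDF by URL, so keep
            # the informative error on this path.
            raise ValueError(
                "PDF URLs are not supported by the Chat Completions API; "
                "use the Responses API or pass base64-encoded file data."
            )
        # Responses format: an input_file part referencing the URL.
        return {"type": "input_file", "file_url": block["url"]}
    if block.get("type") == "image" and "url" in block:
        # Image URLs are supported by both APIs, with different shapes.
        if api == "chat_completions":
            return {"type": "image_url", "image_url": {"url": block["url"]}}
        return {"type": "input_image", "image_url": block["url"]}
    raise ValueError(f"Unsupported standard content block: {block!r}")
```

Under this sketch, `BaseChatOpenAI` no longer needs its own Chat-Completions-to-Responses translation step for converted blocks, while still accepting Chat Completions format directly on the Responses path for users who pass it themselves.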