Fix Assistant OpenAI adapter to handle message content structure returned by to_hash method #952
While using the gem for the first time, I was discovering `Langchain::Assistant` and experimenting with it. Using it naively, I came across an issue:
Here `message_hash` equals the structured hash returned by the message's `to_hash` method. From there I was thinking: great, I can just persist `messages_hash` somewhere and then use the method `@assistant.add_messages` to resume the conversation. But this happened: `resumed_hash` is not what I persisted. The original `message_hash` gets stringified and nested into a new hash.
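Roughly, the round trip I was attempting looks like this (a minimal sketch of my usage, not code from this PR; the constructor arguments and the `image_url:` option come from my own setup and may need adjusting for yours):

```ruby
require "langchain"

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# Build a conversation with a message that carries both text and an image url.
assistant = Langchain::Assistant.new(llm: llm)
assistant.add_message(content: "Describe this image", image_url: "https://example.com/cat.png")

# Persist the conversation, e.g. as JSON in a database column.
messages_hash = assistant.messages.map(&:to_hash)

# Later, try to resume the conversation from the persisted hashes.
resumed = Langchain::Assistant.new(llm: llm)
resumed.add_messages(messages: messages_hash)
resumed_hash = resumed.messages.map(&:to_hash)

# Expected: resumed_hash == messages_hash
# Actual:   each original content structure ends up stringified and nested
#           inside a brand new content hash.
```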
After some investigation I found out that this behaviour comes from the function `build_message` at `/lib/langchain/assistant/llm/adapters/openai.rb:42`, which ignores the fact that the method `to_hash`, from the same object, transforms `content` into a hash that merges the text message and the image url.

This PR is extracted from a monkey patch I made in my project.
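The idea behind the patch is roughly the following (a sketch of the approach rather than the exact code in this PR; the adapter constant is inferred from the file path above, the `build_message` keywords may differ slightly between versions, and the `part_value` helper is purely illustrative):

```ruby
# Unwrap content that is already in the structured format produced by
# `to_hash` (an array of parts such as {"type" => "text", ...}) before the
# adapter wraps it again into a new message.
module OpenAIAdapterContentFix
  # Reads a key from a part hash whether its keys are strings or symbols.
  def self.part_value(part, key)
    part[key.to_s] || part[key.to_sym]
  end

  def build_message(role:, content: nil, image_url: nil, tool_calls: [], tool_call_id: nil)
    if content.is_a?(Array) && content.all? { |part| part.is_a?(Hash) }
      text_part  = content.find { |part| OpenAIAdapterContentFix.part_value(part, :type) == "text" }
      image_part = content.find { |part| OpenAIAdapterContentFix.part_value(part, :type) == "image_url" }

      content = text_part && OpenAIAdapterContentFix.part_value(text_part, :text)

      if image_url.nil? && image_part
        nested    = OpenAIAdapterContentFix.part_value(image_part, :image_url)
        image_url = nested && OpenAIAdapterContentFix.part_value(nested, :url)
      end
    end

    super(role: role, content: content, image_url: image_url, tool_calls: tool_calls, tool_call_id: tool_call_id)
  end
end

Langchain::Assistant::LLM::Adapters::OpenAI.prepend(OpenAIAdapterContentFix)
```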
I also added tests for `#build_message` that cover the previous behaviour and the one added.
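For illustration, the specs look roughly like this (a sketch rather than the exact specs in the diff; it assumes the adapter can be instantiated with no arguments and that the built message exposes `content` and `image_url` readers):

```ruby
RSpec.describe Langchain::Assistant::LLM::Adapters::OpenAI do
  subject(:adapter) { described_class.new }

  describe "#build_message" do
    it "keeps plain string content as-is (previous behaviour)" do
      message = adapter.build_message(role: "user", content: "Hello there")

      expect(message.content).to eq("Hello there")
    end

    it "unwraps content already structured by #to_hash (new behaviour)" do
      structured = [
        {"type" => "text", "text" => "Hello there"},
        {"type" => "image_url", "image_url" => {"url" => "https://example.com/cat.png"}}
      ]

      message = adapter.build_message(role: "user", content: structured)

      expect(message.content).to eq("Hello there")
      expect(message.image_url).to eq("https://example.com/cat.png")
    end
  end
end
```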