
Conversation

@Davidyz Davidyz commented Nov 7, 2025

Description

Add an optional parse_extra handler that parses extra (non-standard) fields in the OpenAI chat/completions responses. This can be used by downstream adapters (openrouter, gemini, deepseek) to render their in-house reasoning format. This handler will be defined in either the downstream adapter (deepseek, gemini, etc.) or the user config (when they use extend to customise the adapter), and will only be called when there's a non-nil extra field.
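To illustrate the intended flow, here is a minimal Python sketch (the plugin itself is Lua; the handler name `parse_extra` comes from this PR, while the function shapes, field names, and merge logic below are assumptions for illustration only):

```python
def default_chat_output(choice):
    # Standard OpenAI fields handled by the base adapter
    return {"content": choice["message"].get("content")}

def process_choice(choice, parse_extra=None):
    """Parse a chat/completions choice, then hand any non-standard
    (extra) fields to an optional downstream handler."""
    output = default_chat_output(choice)
    # Collect fields the OpenAI spec doesn't define,
    # e.g. DeepSeek's reasoning_content
    standard = {"role", "content", "tool_calls", "refusal"}
    extra = {k: v for k, v in choice["message"].items() if k not in standard}
    # The handler is only invoked when there is something extra to parse
    if parse_extra and extra:
        output.update(parse_extra(extra))
    return output

# A downstream adapter (e.g. deepseek) could then supply something like:
def deepseek_parse_extra(extra):
    return {"reasoning": extra.get("reasoning_content")}
```

With this shape, the base OpenAI adapter stays untouched and each downstream adapter (or a user's `extend` config) only has to describe how its own extra fields map onto the chat buffer.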

Related Issue(s)

Compared to #1938, this is a more flexible and less invasive design that can be made to work with more OpenAI-based APIs.
If this PR is accepted, I'll update #2306 to use this too.

Checklist

  • I've read the contributing guidelines and have adhered to them in this PR
  • I've added test coverage for this fix/feature
  • I've run make all to ensure docs are generated, tests pass and my formatting is applied
  • (optional) I've updated CodeCompanion.has in the init.lua file for my new feature
  • (optional) I've updated the README and/or relevant docs pages


Davidyz commented Nov 7, 2025

This is still WIP (needs tests, typing and better docs), but @olimorris would you accept this as an alternative to #1938?

@SDGLBL, since you wrote the original PR, I'd love to know your thoughts on this too (there's an example snippet in the docs about how to configure this for the openrouter format).

@olimorris olimorris (Owner) commented

Totally down for this. It'd be a welcome addition.


Davidyz commented Nov 8, 2025

Nice. I'll also try to refactor the deepseek adapter to use this design. If that works, we'd automatically have unit tests for parse_extra.

In the deepseek documentation, they're using the OpenAI Python SDK with the /chat/completions endpoint, which is not the one that they claim to be OpenAI-compatible (/v1/chat/completions). I think we can reasonably assume that the former endpoint (the same one used by the cc deepseek adapter) is also OpenAI-compatible, just with the extra reasoning_content field.
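For reference, a non-streaming deepseek-reasoner message looks roughly like this (the values are illustrative, but reasoning_content is the documented extra field):

```python
# Illustrative shape of a deepseek-reasoner chat/completions message;
# identical to OpenAI's except for the extra reasoning_content field.
message = {
    "role": "assistant",
    "content": "The answer is 42.",
    "reasoning_content": "First, consider the question...",
}

# Only reasoning_content is non-standard; a parse_extra handler would
# receive it, while the base adapter handles the rest.
extra_fields = {
    k: v for k, v in message.items() if k not in {"role", "content", "tool_calls"}
}
```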


Davidyz commented Nov 8, 2025

The tests are failing because the openai chat_output handler didn't use tool.index from the response, but rather the 1-based iteration variable, in the non-streaming case. When I changed this to _index = tool_index, many other tests failed (i.e. tests for other OpenAI-based adapters). I looked up the OpenAI docs and it seems index isn't part of the non-streaming tool_calls response, so I'll just change the index in the mocked response here.
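The pitfall can be sketched like this (Python illustration; `index` follows the OpenAI response shape, where streaming deltas carry an explicit 0-based index but non-streaming tool_calls do not; the fallback-to-position logic is an assumption about the fix, not the plugin's actual code):

```python
def index_tool_calls(tool_calls):
    """Assign an index to each tool call. Streaming deltas carry an
    explicit 0-based `index`; non-streaming responses don't define one,
    so fall back to the (1-based) position in the list."""
    out = []
    for i, call in enumerate(tool_calls, start=1):
        out.append({"_index": call.get("index", i), "id": call.get("id")})
    return out
```

Mocking a non-streaming response with an `index` field would therefore test a shape the real API never returns, which is why fixing the mock is the right move.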

@Davidyz Davidyz force-pushed the feat/extra_handler branch from e7ccab1 to 7a77871 Compare November 8, 2025 08:32
@Davidyz Davidyz force-pushed the feat/extra_handler branch from 6035eb5 to 42474d4 Compare November 9, 2025 02:40
@Davidyz Davidyz marked this pull request as ready for review November 9, 2025 02:42
@olimorris olimorris added the P3 Low impact, low urgency label Nov 9, 2025
@Davidyz Davidyz mentioned this pull request Nov 9, 2025