Conversation

@flovilmart
Contributor

@flovilmart flovilmart commented Oct 21, 2025

Description

Not sure why, but tools are disabled for the Mistral adapter.
As an avid Mistral and Codestral user, I wanted to remedy this!

I'm not sure what led to tools being disabled (they were flaky in some older versions), but my surface-level testing seems to be working as needed.

Checklist

  • I've read the contributing guidelines and have adhered to them in this PR
  • I've added test coverage for this fix/feature
  • I've run make all to ensure docs are generated, tests pass and my formatting is applied
  • (optional) I've updated CodeCompanion.has in the init.lua file for my new feature
  • (optional) I've updated the README and/or relevant docs pages

@olimorris olimorris added the P4 Negligible impact and urgency label Oct 21, 2025
@olimorris
Owner

Thanks for the PR. There's a couple of things this needs before I can merge:

  1. Test coverage - We need some tool-specific tests and stubs. I think the deepseek adapter's tests are the best example to follow. You will need to turn debug logging on to get the streamed and non-streamed output from the logs.
  2. Schema updates - Based on this page, it also looks like some models included in the Mistral adapter don't support function calling, so we'll need to conditionally disable it for them. I did that in the OpenAI Responses adapter with has_function_calling.

I don't recall tool use ever being "disabled", we just never had anyone to take the time to add support for them so appreciate your help here.
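For reference, that conditional disable could be sketched roughly as follows. This is an assumption modelled on the `has_function_calling` approach mentioned above, not the adapter's actual code; the model list is a placeholder to be filled from Mistral's documentation:

```lua
-- Hypothetical sketch: gate tool support per model. The list below is a
-- placeholder; the real entries should come from Mistral's function-calling docs.
local MODELS_WITHOUT_TOOLS = {
  -- e.g. models that Mistral documents as lacking function calling
}

---Return true if the given model supports function calling
---@param model string
---@return boolean
local function has_function_calling(model)
  for _, m in ipairs(MODELS_WITHOUT_TOOLS) do
    if m == model then
      return false
    end
  end
  return true
end
```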

@flovilmart
Contributor Author

Sounds good @olimorris ! I’ll have a look at this, bear with me this is my first contribution to any lua/vim plugins 😅

@olimorris
Owner

@flovilmart I'm here to help. Fire any questions my way. I just really appreciate your taking the time to contribute.

Btw, the CONTRIBUTING.md file has a section about using memory. It's a much faster way of getting an LLM up to speed on how things work in CodeCompanion.

@flovilmart flovilmart marked this pull request as draft October 21, 2025 22:32
@flovilmart
Contributor Author

@olimorris thank you!
I am almost done with the basic use cases.

There are some odd behaviours around failed tool calls that don't always make it back to the adapter, and Mistral really doesn't like not having an assistant response between tool and user messages, but I'll dig into it later.

{"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_message_order","param":null,"code":"3230"}
[ERROR] 2025-10-23 09:29:24
Error: {"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_message_order","param":null,"code":"3230"}

My current question is more around how to create the stubs for the HTTP calls - is there a way to make the spec generate the stubs with the currently implemented adapter? Or to add the weather adapter to the current CC instance?

@olimorris
Owner

You can add the weather adapter to your config to see the output. Add this to the tools section of the CodeCompanion repo config you're working on:

["weather"] = {
  callback = vim.fn.getcwd() .. "/tests/strategies/chat/tools/catalog/stubs/weather.lua",
  description = "Get the latest weather",
},

Also, make sure you have logs set to DEBUG.

You can then do "What's the @{weather} like in Paris?" and you should get a response. I expect the handling of the streamed response is where the difference to OpenAI may lie.

@flovilmart
Contributor Author

Thank you!
This is the exact route I explored, and it seems to be working.

> I expect the handling of the streamed response is where the difference to OpenAI may lie.

Indeed: extra arguments are invalid for Mistral, so I had to remove _index from the payloads.
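A minimal sketch of that cleanup (illustrative only, not the adapter's actual handler; the helper name is made up), removing the internal `_index` key before the request body is built, since Mistral rejects unknown fields with `extra_forbidden`:

```lua
-- Illustrative only: shallow-copy each tool call, dropping the internal
-- `_index` field that Mistral's API rejects as an extra input.
local function strip_index(tool_calls)
  local cleaned = {}
  for i, call in ipairs(tool_calls) do
    local copy = {}
    for k, v in pairs(call) do
      if k ~= "_index" then
        copy[k] = v
      end
    end
    cleaned[i] = copy
  end
  return cleaned
end
```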

@flovilmart flovilmart marked this pull request as ready for review October 23, 2025 17:01
@olimorris
Owner

Killed it, @flovilmart 👏🏼. Great job. That works brilliantly in my testing. I've updated the docs to show Mistral compatibility.

Regarding your comment above, if we don't need that functionality, I suggest we remove it.

@flovilmart
Contributor Author

flovilmart commented Oct 23, 2025

> Regarding your comment above, if we don't need that functionality, I suggest we remove it.

Let me remove then!

@flovilmart
Contributor Author

@olimorris I have another question: Mistral requires all tool errors to be forwarded to it, otherwise it may cause this:

Error: {"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_message_order","param":null,"code":"3230"}

I've countered locally by setting:

        strategies = {
          chat = {
            adapter = "mistral",
            tools = {
              opts = {
                auto_submit_errors = true, -- Send any errors to the LLM automatically?
              },
            }
          },

But it would be better if that were on by default via the adapter.
Any pointers?

@olimorris
Owner

That's strange because by default, CodeCompanion adds all error messages to the message stack. Even if there's an error in how the tool handles errors, we capture it and let the LLM know there was an internal error (source).

Simply put, this can't happen from inside CodeCompanion's tool orchestration layer.

All auto_submit_errors does is save the user from pressing <CR>.

@flovilmart
Contributor Author

@olimorris thanks for the pointer in the orchestrator! I'll try to reproduce and see how that goes!

@flovilmart
Contributor Author

ex:

[INFO] 2025-10-24 10:14:28
Chat request started
[INFO] 2025-10-24 10:14:28
Request body file: /var/folders/8n/5xd6mw6d3174bpk02tb303ph0000gr/T/nvim.florentvilmart/rakiJH/0.json
[DEBUG] 2025-10-24 10:14:28
Request:
{ "-sSL", "-D", "/tmp/plenary_curl_1a5b4691.headers", "-X", "POST", "-H", "Content-Type: application/json", "-H", "Authorization: Bearer MISTRAL_API_KEY", "-d", "@/var/folders/8n/5xd6mw6d3174bpk02tb303ph0000gr/T/nvim.florentvilmart/rakiJH/0.json", "--retry", "3", "--retry-delay", "1", "--keepalive-time", "60", "--connect-timeout", "10", "--tcp-nodelay", "--no-buffer", "https://api.mistral.ai/v1/chat/completions" }
[DEBUG] 2025-10-24 10:14:28
Output data:
data: {"id":"d027547c038c4b7c989137889e5ab5e4","object":"chat.completion.chunk","created":1761315268,"model":"mistral-small-latest","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
[DEBUG] 2025-10-24 10:14:28
Output data:
data: {"id":"d027547c038c4b7c989137889e5ab5e4","object":"chat.completion.chunk","created":1761315268,"model":"mistral-small-latest","choices":[{"index":0,"delta":{"tool_calls":[{"id":"2Y45IlNJq","function":{"name":"search_web","arguments":"{\"query\": \"weather in Montreal\", \"domains\": []}"},"index":0}]},"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":1871,"total_tokens":1890,"completion_tokens":19},"p":"abcdefghijklmnopqrstuvwxyz01234567"}
[DEBUG] 2025-10-24 10:14:28
Output data:
data: [DONE]
[INFO] 2025-10-24 10:14:28
[Edit Tracker] Initialized edit tracking for chat 5901038
[INFO] 2025-10-24 10:14:28
[Edit Tracker] Starting tool monitoring for: search_web
[DEBUG] 2025-10-24 10:14:28
[Tools] strategies.chat.tools.catalog.search_web identified
[INFO] 2025-10-24 10:14:28
[Tool System] Initiated
[DEBUG] 2025-10-24 10:14:28
Orchestrator:execute - `search_web` tool
[INFO] 2025-10-24 10:14:28
[Edit Tracker] Starting tool monitoring for: search_web
[DEBUG] 2025-10-24 10:14:28
Runner:setup 1
[DEBUG] 2025-10-24 10:14:28
Args: {
  domains = {},
  query = "weather in Montreal"
}
[DEBUG] 2025-10-24 10:14:28
Runner:run
[INFO] 2025-10-24 10:14:28
Request body file: /var/folders/8n/5xd6mw6d3174bpk02tb303ph0000gr/T/nvim.florentvilmart/rakiJH/1.json
[DEBUG] 2025-10-24 10:14:28
Request:
{ "-sSL", "-D", "/tmp/plenary_curl_fb2f4457.headers", "--compressed", "-X", "POST", "-H", "Authorization: Bearer TAVILY_API_KEY", "-H", "Content-Type: application/json", "-d", "@/var/folders/8n/5xd6mw6d3174bpk02tb303ph0000gr/T/nvim.florentvilmart/rakiJH/1.json", "--retry", "3", "--retry-delay", "1", "--keepalive-time", "60", "--connect-timeout", "10", "https://api.tavily.com/search" }
[DEBUG] 2025-10-24 10:14:29
Output data:
{
  body = '{\n    "detail": {\n        "error": "Unauthorized: missing or invalid API key."\n    }\n}',
  exit = 0,
  headers = { "server: awselb/2.0", "date: Fri, 24 Oct 2025 14:14:29 GMT", "content-length: 87", "content-type: application/json" },
  status = 401
}
[ERROR] 2025-10-24 10:14:29
[Search Web Tool] Error searching for `weather in Montreal`
[DEBUG] 2025-10-24 10:14:29
Orchestrator:error
[INFO] 2025-10-24 10:14:29
[Edit Tracker] Finishing monitoring and detecting changes for tool: search_web (success=false)
[DEBUG] 2025-10-24 10:14:29
[Search Web Tool] Error output: { "Error searching for `weather in Montreal`\nError 401 - table: 0x0102cd54f8" }
[DEBUG] 2025-10-24 10:14:29
Tool output: {
  ["function"] = {
    arguments = '{"query": "weather in Montreal", "domains": []}',
    name = "search_web"
  },
  id = "2Y45IlNJq"
}
[DEBUG] 2025-10-24 10:14:29
Orchestrator:execute - Queue empty
[DEBUG] 2025-10-24 10:14:29
Orchestrator:close
[ERROR] 2025-10-24 10:14:32
[Search Web Tool] Error searching for `weather in Montreal`
[INFO] 2025-10-24 10:14:32
Chat request finished
[INFO] 2025-10-24 10:14:32
[Tools] Completed

Maybe I should open another issue on that?

@olimorris
Owner

"Unauthorized: missing or invalid API key."

This suggests something is wrong with your config rather than with CodeCompanion. Also, are we conflating other issues in this PR? I was happy that Mistral tool calling was working fine, so I thought we could merge this PR.

@flovilmart
Contributor Author

@olimorris, yes, Mistral tool calling is working; let's merge!
I'll open another issue.

@flovilmart
Contributor Author

@olimorris actually, this behaviour is specific to Mistral: all errors have to be reported back to the model upstream.
I checked with Copilot and it doesn't break if a user message follows a tool call.

But with Mistral, you always need to report every tool call success and error back to the model, otherwise it breaks with the following:

[chat::_submit_http] Error: {
  body = "{\"object\":\"error\",\"message\":\"Unexpected role 'user' after role 'tool'\",\"type\":\"invalid_request_message_order\",\"param\":null,\"code\":\"3230\"}",
  exit = 0,
  headers = { },
  status = 400
}

@olimorris
Owner

Can you share the steps you take to get this error? I can try to recreate it.

@flovilmart
Contributor Author

Sure!
Minimal config:

      require("codecompanion").setup({
        strategies = {
          chat = {
            adapter = "mistral",
            tools = {
              opts = {
                -- this is needed for mistral to work properly
                -- auto_submit_errors = true, -- needed to avoid the errors
              },
            }
          },

        },
        opts = {
          log_level = "DEBUG"
        },
      })
  1. Open CodeCompanionChat
  2. Enter @{search_web} for weather in Montreal
  3. Send the command
  4. Notice the error:
## Me

> Context:
> - <tool>search_web</tool>

@{search_web} for weather in Montreal

## CodeCompanion (Mistral)


Error searching for `weather in Montreal`

## Me

> Context:
> - <tool>search_web</tool>
  5. Send another command:
## Me

> Context:
> - <tool>search_web</tool>

What's up
  6. Notice the error:
body = "{\"object\":\"error\",\"message\":\"Unexpected role 'user' after role 'tool'\",\"type\":\"invalid_request_message_order\",\"param\":null,\"code\":\"3230\"}",

@olimorris
Owner

For full transparency, I've got the message stack of both Copilot and Mistral, with some messages removed for brevity:

Copilot:

local messages = 
{ {
    _meta = {
      cycle = 1,
      id = 841883897,
      index = 3,
      sent = true
    },
    content = "Can you use the search_web tool and tell me the weather in London?",
    opts = {
      visible = true
    },
    role = "user"
  }, {
    _meta = {
      cycle = 1,
      id = 1789836948,
      index = 5
    },
    opts = {
      visible = false
    },
    role = "llm",
    tools = {
      calls = { {
          _index = 0,
          ["function"] = {
            arguments = '{"domains":[],"query":"current weather in London"}',
            name = "search_web"
          },
          id = "call_83qmnBnz3baNyrQjxU8mlCKG",
          type = "function"
        } }
    }
  }, {
    _meta = {
      cycle = 1,
      id = 12606342
    },
    content = "Error searching for `current weather in London`",
    opts = {
      visible = true
    },
    role = "tool",
    tools = {
      call_id = "call_83qmnBnz3baNyrQjxU8mlCKG"
    }
  }, {
    _meta = {
      cycle = 2,
      id = 405742781,
      index = 7,
      sent = true
    },
    content = "What went wrong?",
    opts = {
      visible = true
    },
    role = "user"
  }, {
    _meta = {
      cycle = 2,
      id = 663314293,
      index = 8
    },
    content = "The search tool encountered an error while trying to retrieve information about the current weather in London. This could be due to a temporary connectivity issue, a problem with the search service, or a restriction on accessing real-time weather data.\n\nWould you like me to try again or assist with something else?",
    opts = {
      visible = true
    },
    role = "llm"
  } }

Mistral:

local messages = 
{ {
    _meta = {
      cycle = 1,
      id = 841883897,
      index = 3,
      sent = true
    },
    content = "Can you use the search_web tool and tell me the weather in London?",
    opts = {
      visible = true
    },
    role = "user"
  }, {
    _meta = {
      cycle = 1,
      id = 1188603807,
      index = 5
    },
    opts = {
      visible = false
    },
    role = "llm",
    tools = {
      calls = { {
          ["function"] = {
            arguments = '{"query": "weather in London", "domains": []}',
            name = "search_web"
          },
          id = "K2gTwq1e7"
        } }
    }
  }, {
    _meta = {
      cycle = 1,
      id = 364323211
    },
    content = "Error searching for `weather in London`",
    opts = {
      visible = true
    },
    role = "tool",
    tools = {
      call_id = "K2gTwq1e7"
    }
  }, {
    _meta = {
      cycle = 2,
      id = 442009274,
      index = 7,
      sent = true
    },
    content = "Thanks. What went wrong?",
    opts = {
      visible = true
    },
    role = "user"
  } }

and I get this error with Mistral:

Error: {"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_message_order","param":null,"
code":"3230"}

Both stacks are the same, and it looks like Mistral goes against the OpenAI standard by mandating User -> Tool -> LLM. When we trigger the error, we're sending User -> Tool -> User.

So the options I can think of, are:

  1. Ask all Mistral users to turn on auto_submit_errors (or whatever the option's called); or
  2. Detect this pattern in form_messages and insert a simple LLM message which says `The tool failed`, or words to that effect.
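Option 2 could be sketched roughly like this (an assumption, not the merged implementation): whenever a tool message is immediately followed by a user message, splice in a stub assistant message so Mistral sees User -> Tool -> Assistant -> User:

```lua
-- Rough sketch of option 2: repair the message order Mistral requires by
-- inserting a stub assistant message between a tool result and a user turn.
local function patch_message_order(messages)
  local out = {}
  for i, msg in ipairs(messages) do
    table.insert(out, msg)
    local next_msg = messages[i + 1]
    if msg.role == "tool" and next_msg and next_msg.role == "user" then
      table.insert(out, { role = "assistant", content = "The tool call failed." })
    end
  end
  return out
end
```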

@flovilmart
Contributor Author

@olimorris I'm glad you managed to reproduce it!
For now, let's mandate turning on auto_submit_errors and auto_submit_success when using Mistral with tools.
That's an acceptable workaround for me!

@flovilmart
Contributor Author

flovilmart commented Oct 24, 2025

@olimorris this approach could work too:

diff --git a/lua/codecompanion/adapters/http/mistral.lua b/lua/codecompanion/adapters/http/mistral.lua
index e0519df4..0e05af5d 100644
--- a/lua/codecompanion/adapters/http/mistral.lua
+++ b/lua/codecompanion/adapters/http/mistral.lua
@@ -15,6 +15,7 @@ return {
     stream = true,
     vision = true,
     tools = true,
+    tools_opts = { always_submit = true }
   },
   features = {
     text = true,
diff --git a/lua/codecompanion/strategies/chat/tools/init.lua b/lua/codecompanion/strategies/chat/tools/init.lua
index 87280703..025cdd35 100644
--- a/lua/codecompanion/strategies/chat/tools/init.lua
+++ b/lua/codecompanion/strategies/chat/tools/init.lua
@@ -238,13 +238,20 @@ function Tools:set_autocmds()
           if vim.g.codecompanion_yolo_mode then
             return auto_submit()
           end
+          if self.always_submit then
+            return auto_submit()
+          end
           if self.status == CONSTANTS.STATUS_ERROR and self.tools_config.opts.auto_submit_errors then
             return auto_submit()
           end
           if self.status == CONSTANTS.STATUS_SUCCESS and self.tools_config.opts.auto_submit_success then
             return auto_submit()
           end
 
           self:reset({ auto_submit = false })
         end)
       end
@@ -259,6 +266,7 @@ end
 function Tools:execute(chat, tools)
   local id = math.random(10000000)
   self.chat = chat
+  self.always_submit = chat.adapter.opts.tools_opts and chat.adapter.opts.tools_opts.always_submit
 
   -- Start edit tracking for all tools
   self:_start_edit_tracking(tools)

@kwibus
Contributor

kwibus commented Oct 24, 2025

Hi @flovilmart, I was working on a similar merge request, #2246, which also enabled tool calling, so we might be able to help each other.

I used openai.handlers.chat_output directly, and in my experimentation that did work, but I might have missed something.

I see you add _index instead of using the id like OpenAI. Why did you add this?

About:

> body = "{"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_message_order","param":null,"code":"3230"}"

I only got this error when tool calling was interrupted somewhere, so my assumption was that Mistral gives this error when it calls a tool but doesn't receive the result before the next user message. At least we're not the only ones who get this error.

Hope that helps you!

@flovilmart
Contributor Author

> I see you add _index instead of using the id like openai.
> why did you add this?

If you look at the code, I removed the _index as it was causing issues at one point with the Mistral API:

{"detail":[{"type":"extra_forbidden","loc":["body","messages",2,"assistant","tool_calls",0,"_index"],"msg":"Extra inputs are not permitted","input":0},{"type":"extra_forbidden","loc":["body","messages",2,"assistant","tool_calls",1,"_index"],"msg":"Extra inputs are not permitted","input":1}]}
[ERROR] 2025-10-23 11:02:51
Error: {"detail":[{"type":"extra_forbidden","loc":["body","messages",2,"assistant","tool_calls",0,"_index"],"msg":"Extra inputs are not permitted","input":0},{"type":"extra_forbidden","loc":["body","messages",2,"assistant","tool_calls",1,"_index"],"msg":"Extra inputs are not permitted","input":1}]}

@kwibus
Contributor

kwibus commented Oct 24, 2025

I did not run into that issue.

I am now comparing your branch with mine:
https://github.com/kwibus/codecompanion.nvim/blob/add-mistral-get-models/lua/codecompanion/adapters/http/mistral/init.lua (file has moved)

One change I see:

form_tools = function(self, params, messages)
form_tools = function(self, tools)

I think those are the wrong args; I don't know if that was the issue.

Can you try again with openai.handlers.chat_output, or maybe check my version?

@flovilmart
Contributor Author

@kwibus it's great you didn't run into that issue. This is ready to merge as far as I can tell. We can probably revisit later, but I'd rather not change the implementation I have now, given I hit edge cases that required removing the _index from the function calls.

@kwibus
Contributor

kwibus commented Oct 24, 2025

@flovilmart, no problem. I can try around a bit myself. if I can reproduce that issue, I will let you know.

@flovilmart
Contributor Author

This may depend on the implementation of form_messages here, where there is a form of cleanup going on, and that may have been the cause of the original issue. So it's possible that using the default implementation from the openai adapter works, but I'm still wary.

@olimorris
Owner

Thanks for all your hard work on this @flovilmart.

@olimorris olimorris merged commit 2a2b294 into olimorris:main Oct 24, 2025
4 checks passed
@flovilmart
Contributor Author

Thanks for putting up with me @olimorris!
I'll keep an eye on the few remaining items:

  • reasoning
  • dealing with sending errors back without extra config
  • simplifying implementation as suggested by @kwibus
