add get_models for mistral. like how ollama get_models work #2246
Conversation
I'd be fine accepting this PR, but we need to be able to switch off reasoning and function calling if the models don't support them, like we do with Copilot. The reason I've not allowed this in the OpenAI adapter thus far is because the models endpoint was so primitive.
Yeah, in my merge request I can use `capabilities` to check if a model supports vision and tool calling. I wanted to check if OpenAI worked the same way, but they don't have that.
There are still models like:
* `ministral-3b-2410`, which incorrectly says it does not support chat.
* `mistral-ocr-latest`, which incorrectly says it does support chat.

I asked around on the Mistral Discord, and they confirmed this is a bug and told me they raised an issue in their bug tracker.
Those models still work, but the list of models is already on the long side. This will make the list shorter and remove models that have a better version. There's no reason for the user to override this.
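A rough sketch of how this filtering could look. The `capabilities` flag names (`completion_chat`, `vision`, `function_calling`) follow the model objects Mistral's `v1/models` endpoint returns; the override table for the two buggy models above is purely an assumed workaround until the upstream fix lands:

```python
# Sketch: filter a Mistral /v1/models response down to usable chat models.
# Assumes each model object carries a `capabilities` table of booleans.

# Models whose advertised capabilities are known to be wrong (see above).
CAPABILITY_OVERRIDES = {
    "ministral-3b-2410": {"completion_chat": True},    # wrongly reports no chat
    "mistral-ocr-latest": {"completion_chat": False},  # wrongly reports chat
}

def chat_models(models):
    """Return {model_id: opts} for every model that supports chat."""
    result = {}
    for model in models:
        caps = {**model.get("capabilities", {}),
                **CAPABILITY_OVERRIDES.get(model["id"], {})}
        if not caps.get("completion_chat"):
            continue  # skip OCR/embedding/etc. models
        result[model["id"]] = {
            "has_vision": caps.get("vision", False),
            "can_use_tools": caps.get("function_calling", False),
        }
    return result

sample = [
    {"id": "mistral-small-latest",
     "capabilities": {"completion_chat": True, "vision": True,
                      "function_calling": True}},
    {"id": "mistral-ocr-latest",
     "capabilities": {"completion_chat": True}},  # the buggy flag
]
print(chat_models(sample))
```

With the per-model opts in hand, the adapter could then switch vision and tool support on or off per model, as suggested above.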
*(force-pushed from 61173ce to bad6a7d)*
I am relatively happy with how the code is now.
*(force-pushed from 94a5dfe to 1a4a2ca)*
Description
Call Mistral's `v1/models` endpoint to get an up-to-date list of models, and check which capabilities each model has.
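For reference, fetching the list is a single authenticated GET. A minimal Python sketch (the endpoint URL and Bearer-token scheme follow Mistral's public API docs; reading the key from a `MISTRAL_API_KEY` environment variable is an assumption):

```python
import json
import os
import urllib.request

MODELS_ENDPOINT = "https://api.mistral.ai/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build the authenticated GET request for the models endpoint."""
    return urllib.request.Request(
        MODELS_ENDPOINT,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
    )

def fetch_models(api_key: str) -> list[dict]:
    """Fetch and decode the model list; `data` holds the model objects."""
    with urllib.request.urlopen(build_models_request(api_key)) as resp:
        return json.load(resp)["data"]

if __name__ == "__main__":
    key = os.environ.get("MISTRAL_API_KEY", "")
    req = build_models_request(key)
    print(req.full_url)  # https://api.mistral.ai/v1/models
```

In the adapter itself this would of course go through the plugin's own HTTP client rather than `urllib`; the sketch only illustrates the shape of the call.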
TODO
This is proof-of-concept work, but there are still things I need to work on:
* Is there documentation on which models should work with chat? Can this be implemented in a cleaner way?
* Is `env.models_endpoint` overkill?
* `vision` and `can_use_tools` are set after setup. This might have downsides. For example, you may never get an error message like "The image Slash Command is not enabled for this adapter", because the adapter thinks all models can use vision: it uses `opts.vision`, which is set before setup.
* It might be worth solving it like `get_models.check_thinking_capability`.

Checklist
* I've run `make all` to ensure docs are generated, tests pass and my formatting is applied
* I've updated `CodeCompanion.has` in the init.lua file for my new feature