
Commit c28a8b4

Backport PR #1235: Add information about ollama - document it as an available provider and provide clearer troubleshooting help. (#1239)
Co-authored-by: Fernando Pérez <fperez.net@gmail.com>
1 parent e0199be commit c28a8b4


docs/source/users/index.md

Lines changed: 17 additions & 1 deletion
@@ -439,7 +439,22 @@ models.
 
 ### Ollama usage
 
-To get started, follow the instructions on the [Ollama website](https://ollama.com/) to set up `ollama` and download the models locally. To select a model, enter the model name in the settings panel, for example `deepseek-coder-v2`.
+To get started, follow the instructions on the [Ollama website](https://ollama.com/) to set up `ollama` and download the models locally. To select a model, enter the model name in the settings panel, for example `deepseek-coder-v2`. You can see all locally available models with `ollama list`.
+
+For the Ollama models to be available to JupyterLab-AI, your Ollama server _must_ be running. You can check this by calling `ollama serve` at the terminal; if a server is already running, you will see an error like:
+
+```
+$ ollama serve
+Error: listen tcp 127.0.0.1:11434: bind: address already in use
+```
+
+On some platforms (e.g. macOS or Windows), there may also be a graphical user interface or application that lets you start/stop the Ollama server from a menu.
+
+:::{tip}
+If you don't see Ollama listed as a model provider in the Jupyter-AI configuration box, despite confirming that your Ollama server is active, you may be missing the [`langchain-ollama` Python package](https://pypi.org/project/langchain-ollama/) that is necessary for Jupyter-AI to interface with Ollama, as indicated in the [model providers](#model-providers) section above.
+
+You can install it with `pip install langchain-ollama` (as of Feb 2025 it is not available on conda-forge).
+:::
 
 ### vLLM usage
 
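The hunk above mentions `ollama list` and checking the server with `ollama serve`. For reference, a quick sanity check at the terminal can look like the following; the model name, ID, size, and date shown are illustrative only, and the `curl` probe assumes Ollama's default port `11434` (the same port shown in the bind error above):

```
$ ollama list
NAME                 ID              SIZE      MODIFIED
deepseek-coder-v2    63fb193b3a9b    8.9 GB    3 days ago

# If the server is up on the default port, it answers with a short status line:
$ curl http://localhost:11434
Ollama is running
```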

@@ -710,6 +725,7 @@ We currently support the following language model providers:
 - `cohere`
 - `huggingface_hub`
 - `nvidia-chat`
+- `ollama`
 - `openai`
 - `openai-chat`
 - `sagemaker-endpoint`
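To confirm that the `langchain-ollama` package from the tip above is installed in the same environment as JupyterLab, a one-line import check suffices; `ChatOllama` is the chat-model class the package exposes:

```
$ pip install langchain-ollama
# A silent exit (no ImportError) means the package is installed:
$ python -c "from langchain_ollama import ChatOllama"
```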
