2 files changed: +6 −33 lines changed

First changed file (the examples README):

````diff
@@ -6,6 +6,8 @@ Run the examples in this directory with:
 python3 examples/<example>.py
 ```
 
+See [ollama/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md) for full API documentation
+
 ### Chat - Chat with a model
 - [chat.py](chat.py)
 - [async-chat.py](async-chat.py)
@@ -50,12 +52,8 @@ Requirement: `pip install tqdm`
 
 
 ### Ollama Create - Create a model from a Modelfile
-```python
-python create.py <model> <modelfile>
-```
 - [create.py](create.py)
 
-See [ollama/docs/modelfile.md](https://github.com/ollama/ollama/blob/main/docs/modelfile.md) for more information on the Modelfile format.
 
 
 ### Ollama Embed - Generate embeddings with a model
````
Second changed file (the create example script):

````diff
@@ -1,30 +1,5 @@
-import sys
+from ollama import Client
 
-from ollama import create
-
-
-args = sys.argv[1:]
-if len(args) == 2:
-  # create from local file
-  path = args[1]
-else:
-  print('usage: python create.py <name> <filepath>')
-  sys.exit(1)
-
-# TODO: update to real Modelfile values
-modelfile = f"""
-FROM {path}
-"""
-example_modelfile = """
-FROM llama3.2
-# sets the temperature to 1 [higher is more creative, lower is more coherent]
-PARAMETER temperature 1
-# sets the context window size to 4096, this controls how many tokens the LLM can use as context to generate the next token
-PARAMETER num_ctx 4096
-
-# sets a custom system message to specify the behavior of the chat assistant
-SYSTEM You are Mario from super mario bros, acting as an assistant.
-"""
-
-for response in create(model=args[0], modelfile=modelfile, stream=True):
-  print(response['status'])
+client = Client()
+response = client.create(model='my-assistant', from_='llama3.2', stream=False)
+print(response.status)
````