README.md (14 additions & 31 deletions)
@@ -2,6 +2,7 @@
<p align="center">GPT4All runs large language models (LLMs) privately on everyday desktops & laptops. <br> <br> No API calls or GPUs required - you can just download the application and <a href="https://docs.gpt4all.io/gpt4all_desktop/quickstart.html#quickstart">get started</a>
gpt4all-bindings/python/docs/gpt4all_help/faq.md (2 additions & 8 deletions)
@@ -4,17 +4,11 @@
### Which language models are supported?
-Our backend supports models with a `llama.cpp` implementation which have been uploaded to [HuggingFace](https://huggingface.co/).
+We support models with a `llama.cpp` implementation which have been uploaded to [HuggingFace](https://huggingface.co/).
### Which embedding models are supported?
-The following embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length as GGUF files is 2048 but can be [extended](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF#description).
-
-| Name | Initializing with `Embed4All` | Context Length | Embedding Length | File Size |
Most of the language models you will be able to access from HuggingFace have been trained as assistants. This guides language models not just to answer with relevant text, but with *helpful* text.
@@ -75,16 +84,6 @@ If you want your LLM's responses to be helpful in the typical sense, we recommen
Directly calling `model.generate()` prompts the model without applying any templates.
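A minimal sketch of this distinction, with a stand-in for the model call (the template and function names below are hypothetical, for illustration only; real models ship their own chat templates, which GPT4All applies inside a chat session):

```python
# Illustrative sketch only: a stand-in for an LLM call, showing how a chat
# template wraps a raw prompt. The template here is made up; real models
# define their own.
CHAT_TEMPLATE = "### User:\n{prompt}\n### Assistant:\n"

def apply_template(prompt: str) -> str:
    """Wrap a user message the way a chat session would."""
    return CHAT_TEMPLATE.format(prompt=prompt)

def generate(prompt: str) -> str:
    """Stand-in for model.generate(): the model sees the prompt verbatim."""
    return prompt  # a real model would continue this text

# Direct generate(): no template is applied, the model gets the bare text.
raw = generate("Name three colors.")

# Inside a chat session, the same message is wrapped first.
templated = generate(apply_template("Name three colors."))

print(raw)
print(templated)
```

Because the assistant-tuned model was trained on templated conversations, the wrapped prompt is what steers it toward a helpful reply; the bare prompt is treated as plain text to continue.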
@@ -150,3 +149,11 @@ The easiest way to run the text embedding model locally uses the [`nomic`](https
To learn more about making embeddings locally with `nomic`, visit our [embeddings guide](https://docs.nomic.ai/atlas/guides/embeddings#local-inference).
+
+The following embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length as GGUF files is 2048 but can be [extended](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF#description).
+
+| Name | Using with `nomic` | `Embed4All` model name | Context Length | # Embedding Dimensions | File Size |
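Whichever model produces them, embeddings of a fixed dimension are typically compared by cosine similarity. A self-contained sketch with toy vectors (the 4-dimensional vectors below are made up for illustration; real embedding models emit hundreds of dimensions, as the table's "# Embedding Dimensions" column lists):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" for illustration.
query = [0.1, 0.3, 0.5, 0.1]
doc_a = [0.1, 0.3, 0.5, 0.1]  # same direction as the query
doc_b = [0.5, 0.1, 0.1, 0.3]  # points elsewhere

print(round(cosine_similarity(query, doc_a), 3))  # 1.0 (identical direction)
print(round(cosine_similarity(query, doc_b), 3))
```

Text whose embedding has the higher similarity to the query's embedding is treated as more semantically related, which is the basis of embedding-backed search.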