README.md (+12, -97)
@@ -8,14 +8,14 @@
  <br>
</p>

-Generate text with distributed **Llama 2 (70B)**, **Stable Beluga 2**, **Falcon**, **Guanaco-65B** or **BLOOM-176B** and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
+Generate text with distributed **Llama 2** (70B), **Falcon** (40B+), **BLOOM** (176B) (or their derivatives), and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
-model_name = "petals-team/StableBeluga2"
+model_name = "petals-team/StableBeluga2"  # This one is fine-tuned Llama 2 (70B)

# Connect to a distributed network hosting model layers
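The first hunk stops before the snippet does, so here is a minimal end-to-end sketch of the code being edited, for reference while reviewing. Everything after the `# Connect ...` comment is elided between the two hunks and is therefore an assumption, reconstructed from the visible imports and from the `print(tokenizer.decode(outputs[0]))` context line of the next hunk:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2"  # This one is fine-tuned Llama 2 (70B)

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run inference as if the whole model were local
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
```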
@@ -31,9 +31,9 @@ print(tokenizer.decode(outputs[0])) # A cat sat on a mat...
🚀 <b><a href="https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing">Try now in Colab</a></b>
</p>

-🦙 **Want to run Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev).
+🔏 **Privacy.** Your data will be processed with the help of other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust.

-🔏 **Privacy.** Your data will be processed by other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust.
+🦙 **Want to run Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev).

💬 **Any questions?** Ping us in [our Discord](https://discord.gg/KdThf2bWVU)!
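As a companion to the 🦙 note above: once access to the gated weights is approved and `huggingface-cli login` has been run, loading Llama 2 through Petals should only require swapping the model name. A minimal sketch under that assumption (the cached Hugging Face token is expected to be picked up automatically):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Gated repo: requires approved access and a prior `huggingface-cli login`
model_name = "meta-llama/Llama-2-70b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)
```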
-- Petals runs large language models like [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) and [BLOOM](https://huggingface.co/bigscience/bloom) **collaboratively** — you load a small part of the model, then join people serving the other parts to run inference or fine-tuning.
-- Single-batch inference runs at **up to 6 steps/sec** for **Llama 2** (70B) and ≈ 1 step/sec for BLOOM-176B. This is [up to 10x faster](https://github.com/bigscience-workshop/petals#benchmarks) than offloading, enough to build [chatbots](https://chat.petals.dev) and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
-- Beyond classic language model APIs — you can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of PyTorch.
+- You load a small part of the model, then join a [network](https://health.petals.dev) of people serving the other parts. Single‑batch inference runs at up to **6 tokens/sec** for **Llama 2** (70B) and up to **4 tokens/sec** for **Falcon** (180B) — enough for [chatbots](https://chat.petals.dev) and interactive apps.
+- You can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of **PyTorch** and **🤗 Transformers**.
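The "chatbots and interactive apps" claim above rests on streaming tokens one at a time while reusing the servers' attention caches. A sketch of that pattern with the petals client, offered as an assumption based on the project's chatbot examples rather than on code shown in this diff (in particular, `inference_session(max_length=...)` and the `session=` argument to `generate()`):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Keep one session open so each step reuses the servers' attention caches
# instead of re-sending the whole prefix for every new token.
prefix = tokenizer("A human talks to a friendly AI.\nHuman: Hi!\nAI:",
                   return_tensors="pt")["input_ids"]
with model.inference_session(max_length=256) as session:
    for _ in range(20):  # stream 20 tokens, one at a time
        outputs = model.generate(prefix, max_new_tokens=1, session=session)
        print(tokenizer.decode(outputs[0, -1:]), end="", flush=True)
        prefix = None  # the prefix is only passed on the first step
```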
<th colspan="6">Offloading, max. possible speed on 1x A100 <sup>1</sup></th>
140
-
</tr>
141
-
<tralign="center">
142
-
<td>256 Gbit/s</td>
143
-
<td></td>
144
-
<td>0.18</td>
145
-
<td>0.18</td>
146
-
<td>2.7</td>
147
-
<td>170.3</td>
148
-
</tr>
149
-
<tralign="center">
150
-
<td>128 Gbit/s</td>
151
-
<td></td>
152
-
<td>0.09</td>
153
-
<td>0.09</td>
154
-
<td>2.4</td>
155
-
<td>152.8</td>
156
-
</tr>
157
-
<tr>
158
-
<th colspan="6">Petals on 14 heterogeneous servers across Europe and North America <sup>2</sup></th>
159
-
</tr>
160
-
<tralign="center">
161
-
<td colspan="2">Real world</td>
162
-
<td>0.83</td>
163
-
<td>0.79</td>
164
-
<td>32.6</td>
165
-
<td>179.4</td>
166
-
</tr>
167
-
<tr>
168
-
<th colspan="6">Petals on 3 servers, with one A100 each <sup>3</sup></th>
169
-
</tr>
170
-
<tralign="center">
171
-
<td>1 Gbit/s</td>
172
-
<td>< 5 ms</td>
173
-
<td>1.71</td>
174
-
<td>1.54</td>
175
-
<td>70.0</td>
176
-
<td>253.6</td>
177
-
</tr>
178
-
<tralign="center">
179
-
<td>100 Mbit/s</td>
180
-
<td>< 5 ms</td>
181
-
<td>1.66</td>
182
-
<td>1.49</td>
183
-
<td>56.4</td>
184
-
<td>182.0</td>
185
-
</tr>
186
-
<tralign="center">
187
-
<td>100 Mbit/s</td>
188
-
<td>100 ms</td>
189
-
<td>1.23</td>
190
-
<td>1.11</td>
191
-
<td>19.7</td>
192
-
<td>112.2</td>
193
-
</tr>
194
-
</table>
-
-<sup>1</sup> **An upper bound for offloading performance.** We base our offloading numbers on the best possible hardware setup for offloading: CPU RAM offloading via PCIe 4.0 with 16 PCIe lanes per GPU and PCIe switches for pairs of GPUs. We assume zero latency for the upper bound estimation. In 8-bit, the model uses 1 GB of memory per billion parameters. PCIe 4.0 with 16 lanes has a throughput of 256 Gbit/s, so offloading 176B parameters takes 5.5 seconds. The throughput is halved (128 Gbit/s) if two GPUs sit behind the same PCIe switch.
-
-<sup>2</sup> **A real-world distributed setting** with 14 servers holding 2× RTX 3060, 4× 2080Ti, 2× 3090, 2× A4000, and 4× A5000 GPUs. These are personal servers and servers from university labs, spread across Europe and North America and connected to the Internet at speeds of 100–1000 Mbit/s. 4 servers operate from under firewalls.
-
-<sup>3</sup> **An optimistic setup** that requires the least communication. The client nodes have 8 CPU cores and no GPU.
-
-We provide more evaluations and discuss these results in more detail in **Section 3.3** of our [paper](https://arxiv.org/pdf/2209.01188.pdf).
-
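A quick check of the arithmetic in footnote 1 above; a sketch that just restates the footnote's own assumptions (1 GB per billion parameters in 8-bit, 256 Gbit/s for PCIe 4.0 x16):

```python
# Back-of-the-envelope check: time to stream 176B parameters over PCIe 4.0 x16
params_billion = 176
bytes_per_param = 1                                 # 8-bit weights
model_size_gb = params_billion * bytes_per_param    # ≈ 176 GB in total

pcie_gbit_per_s = 256                               # PCIe 4.0, 16 lanes
pcie_gbyte_per_s = pcie_gbit_per_s / 8              # ≈ 32 GB/s

print(model_size_gb / pcie_gbyte_per_s)             # ≈ 5.5 seconds per full pass
```

Halving the link bandwidth to 128 Gbit/s doubles this to roughly 11 seconds per pass, matching the 0.18 and 0.09 steps/s rows in the table above.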
-## 🛠️ Contributing
+### Benchmarks
+
+Please see **Section 3.3** of our [paper](https://arxiv.org/pdf/2209.01188.pdf).
+
+### 🛠️ Contributing

Please see our [FAQ](https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions#contributing) on contributing.

-## 📜 Citation
+### 📜 Citation

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel.
[Petals: Collaborative Inference and Fine-tuning of Large Models.](https://arxiv.org/abs/2209.01188)