
Commit 14da65e

bimal-gajera authored and stevhliu committed
Update model card for Cohere (huggingface#37056)
* Update Cohere model card to follow standard template
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update cohere.md
  Update code snippet for AutoModel, quantization, and transformers-cli
* Update cohere.md
* Update docs/source/en/model_doc/cohere.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
1 parent 3a18c76 commit 14da65e

File tree

1 file changed (+71, -82 lines)


docs/source/en/model_doc/cohere.md

+71 -82
@@ -1,124 +1,115 @@
-# Cohere
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
-<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
-<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
+        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
 </div>
 
-## Overview
-
-The Cohere Command-R model was proposed in the blogpost [Command-R: Retrieval Augmented Generation at Production Scale](https://txt.cohere.com/command-r/) by the Cohere Team.
-
-The abstract from the paper is the following:
 
-*Command-R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise. Today, we are introducing Command-R, a new LLM aimed at large-scale production workloads. Command-R targets the emerging “scalable” category of models that balance high efficiency with strong accuracy, enabling companies to move beyond proof of concept, and into production.*
+# Cohere
 
-*Command-R is a generative model optimized for long context tasks such as retrieval augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with our industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command-R boasts:
-- Strong accuracy on RAG and Tool Use
-- Low latency, and high throughput
-- Longer 128k context and lower pricing
-- Strong capabilities across 10 key languages
-- Model weights available on HuggingFace for research and evaluation
+Cohere Command-R is a 35B parameter multilingual large language model designed for long context tasks like retrieval-augmented generation (RAG) and calling external APIs and tools. The model is specifically trained for grounded generation and supports both single-step and multi-step tool use. It supports a context length of 128K tokens.
 
-Checkout model checkpoints [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01).
-This model was contributed by [Saurabh Dash](https://huggingface.co/saurabhdash) and [Ahmet Üstün](https://huggingface.co/ahmetustun). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox).
+You can find all the original Command-R checkpoints under the [Command Models](https://huggingface.co/collections/CohereForAI/command-models-67652b401665205e17b192ad) collection.
 
-## Usage tips
 
-<Tip warning={true}>
+> [!TIP]
+> Click on the Cohere models in the right sidebar for more examples of how to apply Cohere to different language tasks.
 
-The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be
-used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`.
+The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModel`], and from the command line.
 
-The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model using `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`. The reason is that the model will first be downloaded ( using the `dtype` of the checkpoints online), then it will be casted to the default `dtype` of `torch` (becomes `torch.float32`), and finally, if there is a `torch_dtype` provided in the config, it will be used.
+<hfoptions id="usage">
+<hfoption id="Pipeline">
 
-Training the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be trained in `bfloat16`.
+```python
+import torch
+from transformers import pipeline
+
+pipeline = pipeline(
+    task="text-generation",
+    model="CohereForAI/c4ai-command-r-v01",
+    torch_dtype=torch.float16,
+    device=0
+)
+pipeline("Plants create energy through a process known as")
+```
 
-</Tip>
-The model and tokenizer can be loaded via:
+</hfoption>
+<hfoption id="AutoModel">
 
 ```python
-# pip install transformers
+import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-model_id = "CohereForAI/c4ai-command-r-v01"
-tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id)
+tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
+model = AutoModelForCausalLM.from_pretrained("CohereForAI/c4ai-command-r-v01", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
 
-# Format message with the command-r chat template
-messages = [{"role": "user", "content": "Hello, how are you?"}]
-input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
-## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
-
-gen_tokens = model.generate(
+# format message with the Command-R chat template
+messages = [{"role": "user", "content": "How do plants make energy?"}]
+input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
+output = model.generate(
     input_ids,
     max_new_tokens=100,
     do_sample=True,
     temperature=0.3,
-)
-
-gen_text = tokenizer.decode(gen_tokens[0])
-print(gen_text)
+    cache_implementation="static",
+)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 
-- When using Flash Attention 2 via `attn_implementation="flash_attention_2"`, don't pass `torch_dtype` to the `from_pretrained` class method and use Automatic Mixed-Precision training. When using `Trainer`, it is simply specifying either `fp16` or `bf16` to `True`. Otherwise, make sure you are using `torch.autocast`. This is required because the Flash Attention only support `fp16` and `bf16` data type.
-
+</hfoption>
+<hfoption id="transformers-cli">
 
-## Resources
+```bash
+# pip install -U flash-attn --no-build-isolation
+transformers-cli chat --model_name_or_path CohereForAI/c4ai-command-r-v01 --torch_dtype auto --attn_implementation flash_attention_2
+```
 
-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Command-R. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
+</hfoption>
+</hfoptions>
 
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
 
-<PipelineTag pipeline="text-generation"/>
+The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 4-bits.
 
-Loading FP16 model
 ```python
-# pip install transformers
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-model_id = "CohereForAI/c4ai-command-r-v01"
-tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id)
+import torch
+from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM
 
-# Format message with the command-r chat template
-messages = [{"role": "user", "content": "Hello, how are you?"}]
-input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
-## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
+bnb_config = BitsAndBytesConfig(load_in_4bit=True)
+tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
+model = AutoModelForCausalLM.from_pretrained("CohereForAI/c4ai-command-r-v01", torch_dtype=torch.float16, device_map="auto", quantization_config=bnb_config, attn_implementation="sdpa")
 
-gen_tokens = model.generate(
+# format message with the Command-R chat template
+messages = [{"role": "user", "content": "How do plants make energy?"}]
+input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
+output = model.generate(
     input_ids,
     max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
-)
-
-gen_text = tokenizer.decode(gen_tokens[0])
-print(gen_text)
+    cache_implementation="static",
+)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 
-Loading bitsnbytes 4bit quantized model
-```python
-# pip install transformers bitsandbytes accelerate
-from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+Use the [AttentionMaskVisualizer](https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139) to better understand what tokens the model can and cannot attend to.
 
-bnb_config = BitsAndBytesConfig(load_in_4bit=True)
+```py
+from transformers.utils.attention_visualizer import AttentionMaskVisualizer
 
-model_id = "CohereForAI/c4ai-command-r-v01"
-tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
+visualizer = AttentionMaskVisualizer("CohereForAI/c4ai-command-r-v01")
+visualizer("Plants create energy through a process known as")
+```
 
-gen_tokens = model.generate(
-    input_ids,
-    max_new_tokens=100,
-    do_sample=True,
-    temperature=0.3,
-)
+<div class="flex justify-center">
+    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/cohere-attn-mask.png"/>
+</div>
 
-gen_text = tokenizer.decode(gen_tokens[0])
-print(gen_text)
-```
 
+## Notes
+- Don’t use the torch_dtype parameter in [`~AutoModel.from_pretrained`] if you’re using FlashAttention-2 because it only supports fp16 or bf16. You should use [Automatic Mixed Precision](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html), set fp16 or bf16 to True if using [`Trainer`], or use [torch.autocast](https://pytorch.org/docs/stable/amp.html#torch.autocast).
 
 ## CohereConfig
 
@@ -143,5 +134,3 @@ print(gen_text)
 
 [[autodoc]] CohereForCausalLM
     - forward
-
-
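The new "Notes" entry above says not to pass `torch_dtype` to `from_pretrained` when using FlashAttention-2 and to rely on mixed precision instead. Below is a minimal sketch of that setup; it is not part of the commit itself. It assumes a CUDA GPU with `flash-attn` installed and enough memory (or offload via `device_map`) for the float32 weights, and it reuses the checkpoint and generation settings from the examples in the diff.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# No torch_dtype here: FlashAttention-2 only supports fp16/bf16, so precision
# is handled at runtime with Automatic Mixed Precision instead.
tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
model = AutoModelForCausalLM.from_pretrained(
    "CohereForAI/c4ai-command-r-v01",
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [{"role": "user", "content": "How do plants make energy?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Run generation under torch.autocast so the compute happens in bf16.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    output = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For training, the same note points to setting `fp16=True` or `bf16=True` in [`Trainer`] rather than wrapping the forward pass in `torch.autocast` yourself.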
