1 parent a795199 commit 2f662c2
docs/source/developer_guides/lora.md
@@ -370,7 +370,7 @@ special_tokens = ['<|start_think|>', '<|stop_think|>']
 tokenizer.add_special_tokens({'additional_special_tokens': special_tokens})
 
 # make room for new tokens in the embedding matrix if it isn't big enough already
-base_model.resize_token_embeddings(max(len(tokenizer), base_model.model.embed_tokens.num_embeddings)
+base_model.resize_token_embeddings(max(len(tokenizer), base_model.model.embed_tokens.num_embeddings))
 
 # typical LoRA config with `trainable_token_indices` targeting embedding layer `embed_tokens`
 # and specifically our new tokens we just added
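The fix adds the closing parenthesis that was missing from the `resize_token_embeddings` call. The pattern `max(len(tokenizer), num_embeddings)` grows the embedding matrix only if the tokenizer has outgrown it. A minimal sketch of that logic in plain PyTorch (a simplified stand-in for transformers' `resize_token_embeddings`, not the actual implementation; sizes and names are illustrative):

```python
import torch
import torch.nn as nn

def resize_token_embeddings(old: nn.Embedding, new_num_tokens: int) -> nn.Embedding:
    """Simplified sketch: keep the existing rows and append freshly
    initialized rows for any newly added tokens."""
    if new_num_tokens <= old.num_embeddings:
        return old  # already big enough, nothing to do
    new = nn.Embedding(new_num_tokens, old.embedding_dim)
    with torch.no_grad():
        new.weight[: old.num_embeddings] = old.weight  # copy old rows over
    return new

# mimic the docs snippet: grow the matrix only if needed
embed_tokens = nn.Embedding(10, 4)
tokenizer_len = 12  # stands in for len(tokenizer) after add_special_tokens
embed_tokens = resize_token_embeddings(
    embed_tokens, max(tokenizer_len, embed_tokens.num_embeddings)
)
print(embed_tokens.num_embeddings)  # → 12
```

Taking the `max` with the current `num_embeddings` makes the call a no-op when the matrix was already padded larger than the vocabulary, which some checkpoints do for alignment.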