Hi. I appreciate your great work.
I fine-tuned the llama-3.2-1B base model using my custom data:

```bash
litgpt finetune_lora "meta-llama/Llama-3.2-1B" \
  --data JSON \
  --data.json_path /data/donggukang/litgpt/OpenMathInstruct-2-1K_litgpt_format.json \
  --data.val_split_fraction 0.1
```
The base model repository doesn't have a `chat_template` defined in its `tokenizer_config.json`.
However, during LoRA fine-tuning, I applied the Alpaca template (the default).
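For reference, this is roughly the prompt format the fine-tuned model now expects (a sketch only; the exact wording is stored in the generated `prompt_style.yaml`, and the example instruction below is hypothetical):

```python
# Approximation of the Alpaca-style prompt that litgpt applies by default
# during instruction fine-tuning; check prompt_style.yaml for the exact text.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("What is 7 * 6?"))  # example instruction, not from my dataset
```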
After the LoRA fine-tuning was completed, the `prompt_style.yaml` file contains the chat template (stored as `prompt_style`), but `tokenizer_config.json` still has the same configuration as the base model (i.e., without a `chat_template`). Will this cause any issues when loading the HF-converted model?
(I think `tokenizer_config.json`'s `chat_template` should be edited to match the trained `prompt_style`.)
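As a minimal sketch of what I mean, using a plain `transformers` tokenizer on the HF-converted checkpoint (the path and the Jinja wording here are illustrative assumptions, not something litgpt produces):

```python
from transformers import AutoTokenizer

# Hypothetical path to the HF-converted model directory
model_dir = "out/hf-converted/llama-3.2-1b-lora"
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Jinja template approximating the Alpaca prompt style used during fine-tuning
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{{ message['content'] }}\n\n### Response:\n"
    "{% elif message['role'] == 'assistant' %}"
    "{{ message['content'] }}{{ eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

# save_pretrained writes the chat_template field into tokenizer_config.json
tokenizer.save_pretrained(model_dir)
```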