
Commit 9a0a6eb (1 parent: 6e29202)

fix config

2 files changed (+2 −1 lines)


recipes/configs/llama3_2/3B_lora.yaml

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ loss:
   _component_: torchtune.modules.loss.LinearCrossEntropyLoss
 
 # Training
-epochs: 2
+epochs: 1
 max_steps_per_epoch: null
 gradient_accumulation_steps: 8  # Use to increase effective batch size
 clip_grad_norm: null
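
The in-diff comment on gradient_accumulation_steps alludes to the usual effective-batch-size arithmetic. A minimal sketch of that relationship, assuming a hypothetical per-device batch_size of 4 (the real value is set elsewhere in this config) and a single device:

# Under gradient accumulation, the optimizer steps once per
# `gradient_accumulation_steps` micro-batches, so each update averages
# gradients over batch_size * gradient_accumulation_steps samples.
batch_size = 4                   # hypothetical; set elsewhere in the config
gradient_accumulation_steps = 8  # value from this diff

effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32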

torchtune/training/checkpointing/_checkpoint_client.py

Lines changed: 1 addition & 0 deletions
@@ -365,6 +365,7 @@ def save_checkpoint(
         checkpointer user has configured.
         """
         intermediate_checkpoint = epoch + 1 < training_progress.total_epochs
+
         if intermediate_checkpoint and self._enable_async_checkpointing:
             self._save_checkpoint_async(
                 model,
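
For context, a minimal sketch of the gating logic this hunk touches, using a simplified stand-in for the real torchtune state (TrainingProgress below is a hypothetical dataclass, not the actual type from _checkpoint_client.py):

from dataclasses import dataclass

@dataclass
class TrainingProgress:
    # hypothetical stand-in for torchtune's training-progress state
    total_epochs: int

def is_intermediate_checkpoint(epoch: int, progress: TrainingProgress) -> bool:
    # An "intermediate" checkpoint is any save before the final epoch;
    # epochs are zero-indexed, so epoch + 1 == total_epochs on the last one.
    return epoch + 1 < progress.total_epochs

# With epochs: 1 (the new config value above), the only checkpoint written
# is the final one, so the async intermediate path is never taken.
progress = TrainingProgress(total_epochs=1)
print(is_intermediate_checkpoint(0, progress))  # False -> final checkpoint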
