If you're planning multi-node AMD training, validate cluster networking first
with the [NCCL/RCCL tests](https://dstack.ai/examples/clusters/nccl-rccl-tests/)
example.
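To make that check concrete, a multi-node `dstack` task along these lines can launch the RCCL benchmark across the cluster. This is a hedged sketch: the image, node count, build commands, and GPU spec are illustrative assumptions, not the exact configuration from the linked example.

```yaml
type: task
name: rccl-tests
nodes: 2                      # one container per node (assumed topology)
image: rocm/dev-ubuntu-22.04  # assumed base image with ROCm preinstalled
commands:
  # Build and run the RCCL all_reduce benchmark; the flags mirror nccl-tests
  # (-b/-e message size range, -f step factor, -g GPUs per process).
  - git clone https://github.com/ROCm/rccl-tests.git
  - cd rccl-tests && make
  - ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
resources:
  gpu: MI300X:8               # placeholder GPU spec
```

The linked NCCL/RCCL tests example has the real, validated configuration, including the MPI setup needed for cross-node runs.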
=== "TRL"

    Below is an example of LoRA fine-tuning Llama 3.1 8B using [TRL](https://rocm.docs.amd.com/en/latest/how-to/llm-fine-tuning-optimization/single-gpu-fine-tuning-and-inference.html)
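    As background on what a LoRA adapter actually changes, here is a minimal NumPy sketch of the low-rank update idea. It is illustrative only: the dimensions are placeholders, and this is not the TRL or dstack configuration.

    ```python
    import numpy as np

    # LoRA in a nutshell: freeze the pretrained weight W and learn a low-rank
    # update B @ A instead of a full d x d matrix. d and r are illustrative.
    d, r = 1024, 8
    rng = np.random.default_rng(0)

    W = rng.standard_normal((d, d))         # frozen pretrained weight
    A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
    B = np.zeros((d, r))                    # trainable, zero-init: W' == W at step 0

    W_eff = W + B @ A
    assert np.allclose(W_eff, W)            # no behavior change before training

    full_params = d * d                     # parameters in a full update
    lora_params = d * r + r * d             # parameters LoRA actually trains
    print(full_params, lora_params)         # 1048576 vs 16384 (64x fewer)
    ```

    With the zero-initialized `B`, the adapted model starts out identical to the base model, and only the small `A`/`B` factors receive gradients during fine-tuning.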