# LoRA & DoRA in TinyGrad

This project demonstrates how to implement and apply LoRA (Low-Rank Adaptation) and DoRA (Weight-Decomposed Low-Rank Adaptation) techniques in TinyGrad. These methods allow efficient fine-tuning of deep learning models by injecting low-rank adapters into linear layers.
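To make the idea concrete, here is a minimal sketch of a LoRA-adapted linear layer in tinygrad. The class name `LoRALinear` and its parameters (`rank`, `alpha`) are illustrative, not this repo's actual API; see `lora_tinygrad/modules/` for the real implementation.

```python
# A LoRA linear layer: a frozen pretrained weight plus a trainable
# low-rank update (B @ A), scaled by alpha / rank.
from tinygrad import Tensor

class LoRALinear:
  def __init__(self, in_features, out_features, rank=4, alpha=1.0):
    # Frozen pretrained weight of shape (out_features, in_features): never updated.
    self.weight = Tensor.kaiming_uniform(out_features, in_features, requires_grad=False)
    # Low-rank factors: A is random, B is zero, so B @ A == 0 at init
    # and the adapted layer starts out identical to the pretrained one.
    self.lora_a = Tensor.kaiming_uniform(rank, in_features, requires_grad=True)
    self.lora_b = Tensor.zeros(out_features, rank, requires_grad=True)
    self.scale = alpha / rank

  def __call__(self, x: Tensor) -> Tensor:
    # y = x W^T + scale * x (B A)^T — only lora_a and lora_b receive gradients.
    return x.linear(self.weight.T) + self.scale * x.linear((self.lora_b @ self.lora_a).T)
```

With `rank` much smaller than the layer's dimensions, the trainable parameter count drops from `out * in` to `rank * (out + in)`, which is where the memory savings come from.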


## 📊 View Computation Graph

To view the computation graph of the model:

```bash
GRAPH=1 ./test.py
```

## 🧠 What are LoRA & DoRA?

- **LoRA** fine-tunes only a small number of trainable parameters by inserting low-rank matrices into a pre-trained model, leaving the original weights frozen.

- **DoRA** (Weight-Decomposed Low-Rank Adaptation) decomposes each pretrained weight matrix into two parts:

  - **Magnitude**, which is trained directly.
  - **Direction**, which is fine-tuned with a LoRA-style low-rank update.

  This hybrid approach lets DoRA approximate full fine-tuning performance more closely while keeping LoRA's efficiency and zero inference overhead; a minimal sketch follows after this list.
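The sketch below shows the DoRA forward pass in tinygrad, assuming the column-wise magnitude convention from the DoRA paper. `DoRALinear` is an illustrative name, not this repo's actual module; see `dora_tinygrad/modules/` for the real code.

```python
# DoRA splits the pretrained weight into a trainable magnitude vector and a
# direction that is adapted with a LoRA-style low-rank update, then renormalised.
from tinygrad import Tensor

class DoRALinear:
  def __init__(self, in_features, out_features, rank=4):
    # Frozen pretrained weight plus LoRA-style trainable factors.
    self.weight = Tensor.kaiming_uniform(out_features, in_features, requires_grad=False)
    self.lora_a = Tensor.kaiming_uniform(rank, in_features, requires_grad=True)
    self.lora_b = Tensor.zeros(out_features, rank, requires_grad=True)
    # Magnitude: initialised to the column-wise norm of the pretrained weight,
    # so at step 0 the layer reproduces the original linear map exactly.
    self.m = self.weight.square().sum(axis=0, keepdim=True).sqrt()
    self.m.requires_grad = True

  def __call__(self, x: Tensor) -> Tensor:
    # Direction: pretrained weight plus the low-rank update ...
    v = self.weight + self.lora_b @ self.lora_a
    # ... normalised column-wise, then rescaled by the learned magnitude.
    v = v / (v.square().sum(axis=0, keepdim=True).sqrt() + 1e-8)
    return x.linear((self.m * v).T)
```

Because `lora_b` starts at zero and `m` starts at the pretrained weight's column norms, the decomposed layer is exactly equivalent to the original one before any training step.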

Watch Miss Coffee Bean's video for a friendly explanation and motivation behind these approaches.


## 📂 Project Structure

```text
.
├── dora_tinygrad/           # DoRA implementation
│   └── modules/             # Base and linear module classes
├── lora_tinygrad/           # LoRA implementation
│   └── modules/             # Base and linear module classes
├── examples/                # Example scripts and utils
│   ├── example_lora.py      # End-to-end LoRA training + finetuning
│   ├── example_dora.py      # End-to-end DoRA training + finetuning
│   ├── mnist_example.ipynb  # Notebook to play with MNIST + LoRA
│   ├── test_lora.py         # Graph/debugging script for LoRA
│   └── utils.py             # Training, evaluation, misc helpers
├── test.py                  # Entry point for testing LoRA/DoRA
└── README.md                # You're here!
```

## 🚀 How to Run

### 1. Clone the Repo

```bash
git clone https://github.com/your-username/tinygrad-lora-dora
cd tinygrad-lora-dora
```

### 2. Set up a Virtual Environment (Recommended)

```bash
python -m venv .env
source .env/bin/activate
```

Jupyter issues? Run `ipython kernel install --name "local-venv-kernel" --user` to register the venv as a kernel.

### 3. Install Dependencies

```bash
pip install -r requirements.txt
```

## 🧪 Try the Examples

Run the LoRA example:

```bash
python examples/example_lora.py
```

Run the DoRA example:

```bash
python examples/example_dora.py
```

## 📘 Notes

- This project does not use external libraries such as `peft`, `transformers`, or `accelerate`; it is meant to be educational and minimal.
- TinyGrad is a great environment for understanding low-level ML concepts, and we leverage its simplicity to explain and explore LoRA and DoRA directly in the core logic.

## 📄 References

- Hu et al., *LoRA: Low-Rank Adaptation of Large Language Models*, arXiv:2106.09685.
- Liu et al., *DoRA: Weight-Decomposed Low-Rank Adaptation*, arXiv:2402.09353.
Happy fine-tuning with less memory! 🎉
