There are two approaches based on CLIP that I'm trying to compare here:
- A ResNet-18 with a BERT-base model, where everything is updated during training
- A ResNet-50 with a BERT-base model, where BERT is frozen
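For reference, the second setup above can be sketched roughly as follows. This is a minimal illustration with small stand-in modules instead of the actual ResNet-50 and BERT-base towers (module names and layer sizes here are hypothetical); the point is just how the text tower is frozen while the image tower stays trainable:

```python
import torch.nn as nn

# Hypothetical stand-ins for the two towers; layer sizes are illustrative only.
image_tower = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
text_tower = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 128))

# Freeze the text tower: its parameters no longer receive gradients,
# so only the image tower's parameters are updated by the optimizer.
for p in text_tower.parameters():
    p.requires_grad = False
text_tower.eval()  # also fixes dropout / batch-norm statistics

trainable = sum(
    p.numel()
    for m in (image_tower, text_tower)
    for p in m.parameters()
    if p.requires_grad
)
total = sum(p.numel() for m in (image_tower, text_tower) for p in m.parameters())
print(f"trainable: {trainable} / total: {total}")
```

Note that `requires_grad = False` only removes parameter gradients; it does not by itself change how much activation memory a forward pass allocates.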
I get an OOM error in the second case on the cached model_forward step, even though the second case has fewer trainable parameters (50M vs. 110M).
For context, I'm using PyTorch Lightning with the functional decorator. It works well in the first case, providing a lot of benefit via bigger batch sizes during training.
Any reason why this would happen?