Description
Hi @hosseinfani
In the training phase of the model `imdb.mt10.ts2.e100.ns5.lr0.001.es5.h128.speTrue.lbce.tpw1.tnw0.none.merge_false` with batch size 5, the validation loss is very low and the training loss is exactly 0 from the first epoch, and this persists through epoch 99. The same behavior is repeated for all folds.
If I understand your explanation from our last meeting correctly, this happens in this setting because there are no negative samples, so the loss is computed only over the positive samples, which are few. This is what drives the loss down to zero.
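To illustrate the point numerically (a minimal sketch, not OpeNTF's actual loss code; I'm assuming `tnw0` in the run name means the negative-term weight of the BCE loss is 0, so only positive entries contribute):

```python
import math

def bce(y_true, y_pred, eps=1e-12):
    """Plain binary cross-entropy averaged over the given entries."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

# When negatives are masked out, the loss only sees positive labels.
# A model that simply pushes all outputs toward 1 then gets a loss of ~0,
# regardless of how badly it scores the (ignored) negatives.
positive_labels = [1, 1, 1]
predictions = [1.0, 1.0, 1.0]
loss = bce(positive_labels, predictions)
print(loss)  # ~0
```

So a near-zero training loss from epoch 1 is the expected outcome of this loss configuration, not a training bug.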
Now, if we assign a small positive value to `delta` in the following line, the model will be stopped early instead of training all the way to the last epoch:
OpeNTF/src/mdl/earlystopping.py
Line 31 in e918163
elif score < self.best_score + self.delta:
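To show why a positive `delta` matters here, a minimal standalone sketch of delta-based early stopping (hypothetical; OpeNTF's actual `EarlyStopping` class may track additional state):

```python
class EarlyStopping:
    """Stop when the score fails to improve by at least `delta`
    for `patience` consecutive epochs."""

    def __init__(self, patience=5, delta=0.0):
        self.patience = patience
        self.delta = delta
        self.best_score = None
        self.counter = 0
        self.early_stop = False

    def __call__(self, val_loss):
        score = -val_loss  # higher score = lower loss = better
        if self.best_score is None:
            self.best_score = score
        elif score < self.best_score + self.delta:
            # Improvement smaller than delta: count toward patience.
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_score = score
            self.counter = 0

# With delta=0 and a perfectly flat loss (constant 0), the condition
# `score < best_score + delta` is never true (0 < 0 is False), so the
# counter keeps resetting and training runs all 100 epochs. With any
# small positive delta, the flat loss counts as "no improvement" and
# training stops after `patience` epochs.
es = EarlyStopping(patience=5, delta=1e-4)
for _ in range(100):
    es(0.0)
print(es.early_stop)  # True
```

The key observation is that a constant loss satisfies `score < best_score + delta` only when `delta > 0`, which is exactly why the runs above never stop early.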
Originally posted by @mahdis-saeedi in #305