
non-zero delta for early stopping #319

@mahdis-saeedi

Description


Hi @hosseinfani

In the training phase of the model imdb.mt10.ts2.e100.ns5.lr0.001.es5.h128.speTrue.lbce.tpw1.tnw0.none.merge_false with batch size 5, the validation loss is very low and the training loss is exactly 0 from the first epoch, and this persists all the way to epoch 99. The same behavior repeats across all folds.

[Figure: training and validation loss curves across epochs; the training loss stays at 0]

If I understand your explanation from our last meeting correctly, this happens in this setting because we have no negative samples: the loss is computed only over the positive samples, which are few, and that drives the loss down to zero.
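To make this concrete, here is a toy sketch of that failure mode. It is illustrative only, not the repository's actual loss; I am assuming from tpw1.tnw0 in the run name that true positives get weight 1 and true negatives weight 0:

```python
import torch

# Toy illustration (not the repo's actual loss): weighted BCE where,
# as tpw1.tnw0 suggests, positives get weight 1 and negatives weight 0,
# so only the (few) positive entries contribute to the loss.
y_true = torch.tensor([1., 0., 0., 0., 0.])  # one positive, four negatives
logits = torch.full((5,), 10.0)              # model predicts "positive" everywhere
y_pred = torch.sigmoid(logits)               # all probabilities ~1

pos_w, neg_w = 1.0, 0.0
loss = -(pos_w * y_true * torch.log(y_pred)
         + neg_w * (1.0 - y_true) * torch.log(1.0 - y_pred)).mean()
print(loss.item())  # ~0: the wrong predictions on negatives cost nothing
```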

Now, if we assign a small positive value to delta in the following line, the model will stop early instead of training all the way to the last epoch.

```python
elif score < self.best_score + self.delta:
```
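For reference, here is a minimal sketch of how that check behaves with a positive delta. It follows the common PyTorch early-stopping pattern and is illustrative, not necessarily the exact class in this repo; `score` is the negated validation loss, so a flat loss never clears `best_score + delta` and patience eventually runs out:

```python
class EarlyStopping:
    # Illustrative sketch, not necessarily the exact class in this repo.
    def __init__(self, patience=5, delta=1e-4):
        self.patience = patience
        self.delta = delta            # minimum improvement to reset patience
        self.best_score = None
        self.counter = 0
        self.early_stop = False

    def __call__(self, val_loss):
        score = -val_loss             # negate the loss so higher is better
        if self.best_score is None:
            self.best_score = score
        elif score < self.best_score + self.delta:
            # improvement smaller than delta: with delta > 0 this also
            # catches a perfectly flat loss, so patience runs out
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_score = score
            self.counter = 0
```

With delta = 0 and a loss stuck at the same value, `score == best_score`, the `elif` never fires, and training runs to the final epoch, which matches what I see above.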

Originally posted by @mahdis-saeedi in #305
