What is the equivalent for batch overfitting for such a training scheme? #227

@tshrjn

My understanding is the following:

The idea is to see if the Generator, when trained exclusively on the small batch, can produce samples that the Discriminator thinks are real. If your network setup and loss functions are working correctly, the Discriminator should become uncertain about whether the samples are real or fake (i.e., its loss should hover around the value indicating a 50% guess). The Generator's loss should decrease, showing it's generating better samples.

How can I achieve this quickly to confirm that everything in the training setup is working properly? I'm not seeing this behavior.
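For reference, here is a minimal sketch of this single-batch overfitting check, assuming a standard BCE-loss GAN in PyTorch. The toy MLP models and the random "real" batch below are hypothetical placeholders, not this repository's code; the point is only to show the loop structure and the loss values one would expect at equilibrium.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, n = 16, 64, 8          # one tiny batch of 8 samples

# Placeholder generator / discriminator (swap in your own models)
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

fixed_batch = torch.randn(n, data_dim)        # stand-in for one real batch, reused every step
ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

for step in range(2001):
    # Discriminator step: the fixed real batch vs. detached fakes
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(fixed_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D label fresh fakes as real
    g_loss = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 500 == 0:
        # At 50/50 uncertainty each BCE term is -ln(0.5) ≈ 0.693, so d_loss
        # should hover near ~1.39 and g_loss near ~0.693 if things are wired correctly.
        print(f"step {step:4d}  d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

If the losses diverge or the discriminator's loss collapses toward zero even on this single fixed batch, that usually points to a wiring issue (labels, detach, or optimizer steps) rather than to the data or model capacity.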
