Instead of adding a random floating-point value, we should replace class $q$ with class $r$ with a specified probability $p$. See the relevant code in `cgtnnlib/training.py`:

```python
for i, (inputs, labels) in enumerate(dataset.data.train_loader):
    if is_classification_task(dataset.learning_task):
        pass  # XXX
        # labels = add_noise_to_labels_classification(
        #     labels=labels,
        #     generate_sample=noise_generator.next_sample,
        # )
    else:
        labels = add_noise_to_labels_regression(
            labels=labels,
            generate_sample=noise_generator.next_sample,
        ).to(torch.float32)
```
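
A minimal sketch of what the classification branch could call instead is shown below. The function name, the `p` and `num_classes` parameters, and the uniform choice of the replacement class are assumptions for illustration; they are not the existing `add_noise_to_labels_classification` API.

```python
import torch

def add_label_noise_classification(
    labels: torch.Tensor,
    p: float,
    num_classes: int,
) -> torch.Tensor:
    """Sketch (assumed API): replace class q with a different class r with probability p."""
    labels = labels.clone()
    # Decide independently for each sample whether its label q gets corrupted.
    corrupt = torch.rand(labels.shape) < p
    # Pick a replacement class r != q by sampling an offset in [1, num_classes - 1]
    # and adding it to q modulo num_classes.
    offsets = torch.randint(1, num_classes, labels.shape)
    labels[corrupt] = (labels[corrupt] + offsets[corrupt]) % num_classes
    return labels
```

Sampling an offset and reducing modulo `num_classes` guarantees that a corrupted label always changes to a different class, while leaving the remaining $1 - p$ fraction of labels untouched.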