Describe the bug
The IdentityTask (under graphnet.models.task) applies learnable parameters because it inherits from LearnedTask and StandardLearnedTask.
To Reproduce
Steps to reproduce the behavior:
- Try to run a script like:
import torch
from graphnet.models.task import IdentityTask
task1 = IdentityTask()
task2 = IdentityTask()
data = torch.ones(3, 4)
print(data)
out1 = task1(data)
out2 = task2(data)
print(out1)
print(out2)
This will throw an error because the classes it inherits from, LearnedTask and StandardLearnedTask, require constructor arguments such as 'hidden_size'.
- With working code like:
import torch
from graphnet.models.task import IdentityTask
from graphnet.training.loss_functions import MSELoss
task1 = IdentityTask(nb_outputs=1, target_labels=['skip'], hidden_size=4, loss_function=MSELoss())
task2 = IdentityTask(nb_outputs=1, target_labels=['skip'], hidden_size=4, loss_function=MSELoss())
data = torch.ones(3, 4)
print(data)
out1 = task1(data)
out2 = task2(data)
print(out1)
print(out2)
you will see that the two tasks return different tensors, when in fact both should return the input tensor unchanged.
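One quick way to confirm that learnable parameters are the culprit is to inspect the tasks directly. A short check, continuing from the snippet above:
# A true identity task should expose zero learnable parameters.
n_params = sum(p.numel() for p in task1.parameters())
print(n_params)  # non-zero here, so a learned transformation is being applied
print(torch.equal(out1, data))  # False, but should be True for an identity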
Expected behavior
The IdentityTask should return the input data unchanged and should not apply any learnable parameters.
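For reference, this is the behavior one would expect, sketched in plain PyTorch rather than against graphnet's actual class hierarchy (PureIdentity is a hypothetical stand-in, not a proposed patch):
import torch
from torch import nn

class PureIdentity(nn.Module):
    # Hypothetical stand-in: forwards input unchanged, no learnable parameters.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x

task1 = PureIdentity()
task2 = PureIdentity()
data = torch.ones(3, 4)
assert torch.equal(task1(data), data)          # output equals input
assert torch.equal(task1(data), task2(data))   # all instances agree
assert sum(p.numel() for p in task1.parameters()) == 0  # no parameters
Plain torch.nn.Identity has the same semantics.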
Additional context
This problem became apparent to me only because, for my application of the EasySyntax base class, I needed a task even though I did not actually use it. When running distributed training with the 'ddp' strategy, I got an error about unused parameters and had to use 'ddp_find_unused_parameters'. I therefore assume this bug is not high priority, and at least for my use case I can work around it. However, people who actually use the IdentityTask might take more issue with it.
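For anyone hitting the same DDP error, a sketch of the workaround, assuming a PyTorch Lightning Trainer is used (the accelerator and device settings here are illustrative):
from pytorch_lightning import Trainer
from pytorch_lightning.strategies import DDPStrategy

# Tell DDP to tolerate parameters that never receive gradients,
# at the cost of some extra overhead per training step.
trainer = Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(find_unused_parameters=True),
)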