
Fix warnings in unit-tests tests #6159


Open: wants to merge 9 commits into main

Changes from 5 commits
1 change: 0 additions & 1 deletion test/test_datasets.py

@@ -616,7 +616,6 @@ class VOCSegmentationTestCase(datasets_utils.ImageDatasetTestCase):
             year=[f"20{year:02d}" for year in range(7, 13)], image_set=("train", "val", "trainval")
         ),
         dict(year="2007", image_set="test"),
-        dict(year="2007-test", image_set="test"),
     )
 
     def inject_fake_data(self, tmpdir, config):
2 changes: 1 addition & 1 deletion torchvision/models/googlenet.py

@@ -34,7 +34,7 @@ def __init__(
         num_classes: int = 1000,
         aux_logits: bool = True,
         transform_input: bool = False,
-        init_weights: Optional[bool] = None,
+        init_weights: bool = True,
Collaborator:

This is not good. Check #2170 for why they switched the default from True to None.
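
For context, a simplified sketch of the sentinel pattern that #2170 introduced (reconstructed, not the verbatim torchvision code): None means "the caller did not choose", which is what lets the constructor warn about the upcoming default change before falling back to the old behavior.

import warnings
from typing import Optional

class GoogLeNetSketch:
    def __init__(self, init_weights: Optional[bool] = None) -> None:
        if init_weights is None:
            # Only reached when the caller relied on the default, so the
            # warning targets exactly the users the default change affects.
            warnings.warn(
                "The default weight initialization of GoogLeNet will be "
                "changed in future releases of torchvision. If you wish to "
                "keep the old behavior, please set init_weights=True.",
                FutureWarning,
            )
            init_weights = True
        self.init_weights = init_weights

Reverting the default to a plain True removes the sentinel: the constructor can no longer tell an explicit opt-in from reliance on the default, so the warning can never be raised again.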

Contributor Author:

Hmm, I don't have a good idea. Is there a better approach than what I did here?
https://github.com/pytorch/vision/pull/6159/commits/16ddd6c0f23fe56e8029f336c5e3f3c69e9c534f

         blocks: Optional[List[Callable[..., nn.Module]]] = None,
         dropout: float = 0.2,
         dropout_aux: float = 0.7,
2 changes: 1 addition & 1 deletion torchvision/models/inception.py

@@ -32,7 +32,7 @@ def __init__(
         aux_logits: bool = True,
         transform_input: bool = False,
         inception_blocks: Optional[List[Callable[..., nn.Module]]] = None,
-        init_weights: Optional[bool] = None,
+        init_weights: bool = True,
         dropout: float = 0.5,
     ) -> None:
         super().__init__()
2 changes: 2 additions & 0 deletions torchvision/models/quantization/googlenet.py

@@ -74,6 +74,8 @@ def forward(self, x: Tensor) -> Tensor:
 class QuantizableGoogLeNet(GoogLeNet):
     # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
     def __init__(self, *args: Any, **kwargs: Any) -> None:
+        if "init_weights" not in kwargs:
+            kwargs["init_weights"] = True
Contributor:

I'm not sure why, in order to fix warnings in the unit tests, we are modifying the models themselves. My expectation, without having all the details in mind, is that the tests are the ones that need to be updated. What am I missing?

Collaborator:

At first I thought that QuantizableGoogLeNet did not properly set up its base class GoogLeNet. Thinking about it more, I think you are right; we should try again to fix this from the test side.

vfdev-5 (Collaborator) commented on Jun 15, 2022:

@puhuk can you check whether we could use _model_params where we set kwargs = {**defaults, **_model_params.get(model_name, {})}? It looks like _model_params can define specific values for special models:

vision/test/test_models.py, lines 246 to 247 (at 93b3e84):

_model_params = {
    "inception_v3": {"input_shape": (1, 3, 299, 299)},

Contributor Author:

OK, let me check and send an updated PR.

         super().__init__(  # type: ignore[misc]
             blocks=[QuantizableBasicConv2d, QuantizableInception, QuantizableInceptionAux], *args, **kwargs
         )
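
For illustration, a hypothetical snippet (not part of the PR) showing the effect of the kwargs default above:

from torchvision.models.quantization.googlenet import QuantizableGoogLeNet

# Omitting init_weights previously fell through to GoogLeNet's sentinel
# default and triggered the FutureWarning; with this change the subclass
# fills in init_weights=True itself, so plain construction is warning-free.
model = QuantizableGoogLeNet()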
1 change: 1 addition & 0 deletions torchvision/models/quantization/inception.py

@@ -142,6 +142,7 @@ def __init__(
                 QuantizableInceptionE,
                 QuantizableInceptionAux,
             ],
+            init_weights=True,
         )
         self.quant = torch.ao.quantization.QuantStub()
         self.dequant = torch.ao.quantization.DeQuantStub()
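
A quick caller-side check of the fix (a hypothetical test snippet, not from the PR):

import warnings
from torchvision.models import quantization as qmodels

# With init_weights now passed explicitly to the Inception3 base class,
# constructing the quantizable model should no longer emit the FutureWarning
# about the changing init_weights default.
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)  # escalate it to a failure
    model = qmodels.inception_v3()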