Ap loss #1

Merged
7 commits merged into master from ap_loss, Apr 2, 2020
Conversation

@ghost commented Apr 1, 2020:

Add AP loss classes and a Sampler class for sampling non-correspondences.
Add support for loss choice in the training class.
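The "loss choice" plumbing described above could look roughly like the following (a hypothetical sketch, not the PR's actual code; the function names and loss stand-ins are assumptions):

```python
# Hypothetical sketch of selecting the training loss by name.
# Both loss functions below are placeholders standing in for the
# existing contrastive loss and the new AP loss, not real implementations.

def contrastive_loss(descriptor_a, descriptor_b):
    # placeholder: squared distance between matched descriptors
    return (descriptor_a - descriptor_b) ** 2

def ap_loss(descriptor_a, descriptor_b):
    # placeholder: 1 minus an AP-style score clamped to [0, 1]
    return 1.0 - min(1.0, abs(descriptor_a - descriptor_b))

def make_loss(name):
    """Return the loss function selected in the training config."""
    losses = {"contrastive": contrastive_loss, "ap": ap_loss}
    if name not in losses:
        raise ValueError("unknown loss: {}".format(name))
    return losses[name]
```

A dict-based factory keeps adding a third loss to a one-line change in the training class.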

@ghost ghost requested a review from swiatkowski April 1, 2020 20:20
@swiatkowski (Owner) left a comment:

Please address my comments, but looks good in general!


loss_function.cuda()
Owner: Why is this needed?

@@ -288,8 +298,10 @@ def run(self, loss_current_iteration=0, use_pretrained=False):
# TPV = TrainingProgressVisualizer()

for epoch in range(50): # loop over the dataset multiple times

print "Epoch {}/50".format(epoch)
Owner: Define 50 as a variable?
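The suggested refactor might look like this (a minimal sketch; `NUM_EPOCHS` is a name I'm assuming, not one from the PR):

```python
# Pull the magic number 50 into a named constant so the loop bound
# and the log message cannot drift apart.
NUM_EPOCHS = 50

def epoch_log_lines(num_epochs=NUM_EPOCHS):
    """Build the per-epoch log messages for a full training run."""
    return ["Epoch {}/{}".format(epoch, num_epochs)
            for epoch in range(num_epochs)]
```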

Owner: Up :)

@@ -643,6 +643,16 @@ def get_within_scene_data(self, scene_name, metadata, for_synthetic_multi_object
image_a_depth_numpy = np.asarray(image_a_depth)
image_b_depth_numpy = np.asarray(image_b_depth)

image_a_mask_numpy = np.asarray(image_a_mask)
Owner: Shouldn't this be behind some flag? Like loss == contrastive-loss etc.?

Author: This is a workaround for this issue: RobotLocomotion/pytorch-dense-correspondence#204
Short story: a few mask files are empty; if so, skip this training sample.
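The workaround can be sketched as a small guard (an illustration; `mask_is_empty` is my name, not the PR's):

```python
import numpy as np

def mask_is_empty(mask_numpy):
    # An all-zero mask yields no pixels to sample correspondences from,
    # so the training sample is unusable (see
    # RobotLocomotion/pytorch-dense-correspondence#204).
    return np.count_nonzero(mask_numpy) == 0

# In get_within_scene_data, a sample with an empty mask would then be
# skipped, e.g. (hypothetical fallback name):
#     if mask_is_empty(image_a_mask_numpy) or mask_is_empty(image_b_mask_numpy):
#         return self.return_empty_data(...)
```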

Owner: @gasiortomasz What I meant on Slack is that we should add the comment you wrote here, but in the code, above this section.

@@ -0,0 +1,139 @@
import numpy as np
Owner: Let's add a comment at the top of this file saying where this code comes from. This will give more context to readers and might help with debugging. Also, please clarify what is taken directly from the R2D2 code and what you needed to change or add.

@ghost ghost merged commit a02fa0c into master Apr 2, 2020

import torch.nn as nn
import torch.nn.functional as F

# this class is taken from https://github.com/naver/r2d2/blob/master/nets/ap_loss.py
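For context, the linked R2D2 `APLoss` approximates average precision by soft-assigning similarity scores to quantization bins, which keeps the ranking metric differentiable. A much-simplified NumPy sketch of that idea (not the actual implementation; `soft_ap` and its defaults are mine):

```python
import numpy as np

def soft_ap(scores, labels, nq=20):
    """Soft-binned approximation of average precision.

    scores: similarities in [0, 1]; labels: 1 for positives, 0 for negatives.
    Each score is soft-assigned to nq bins with triangular kernels, bins are
    traversed from high score to low, and AP is accumulated as
    sum(precision_at_bin * recall_mass_in_bin).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    centers = np.linspace(1.0, 0.0, nq)      # bin centers, high to low
    width = 1.0 / (nq - 1)
    # soft assignment of each score to each bin, shape (nq, n)
    assign = np.clip(
        1.0 - np.abs(scores[None, :] - centers[:, None]) / width, 0.0, None)
    n_pos = assign @ labels                  # positive mass per bin
    n_all = assign.sum(axis=1)               # total mass per bin
    cum_pos = np.cumsum(n_pos)
    cum_all = np.cumsum(n_all)
    prec = cum_pos / np.maximum(cum_all, 1e-12)
    recall_mass = n_pos / max(labels.sum(), 1e-12)
    return float((prec * recall_mass).sum())
```

Written with differentiable operations only (no sorting), so the same computation in PyTorch can be back-propagated through; a loss would then be `1 - soft_ap(...)`.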
Owner: @gasiortomasz Usually you would include such comments at the end of the docstring of the class. Here, after the line starting "Note: typically...".
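The suggested placement would look something like this (a hypothetical docstring, not the PR's actual one):

```python
class APLoss(object):
    """Differentiable approximation of average precision over similarity scores.

    Note: typically the scores are expected to lie in [0, 1].

    Adapted from https://github.com/naver/r2d2/blob/master/nets/ap_loss.py;
    local changes relative to the original should be listed here.
    """
```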

Owner: This applies to all the comments you added here.


@ghost ghost deleted the ap_loss branch April 5, 2020 19:11