Hi,
Thanks again for publishing your work!
I am trying to reproduce the camera and object tracking results in Table I and Table II of the RAL paper. For camera tracking on the manipulation and rotation sequences, I passed "-init tf -init_frame camera_true" as arguments and used the logged poses-0.txt as the ground-truth camera poses. I then evaluated ATE with the evaluate_ate.py script provided with the TUM RGB-D benchmark and got results comparable to those in the paper.
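For reference, the core of what evaluate_ate.py computes can be sketched as below. This is a minimal re-implementation sketch, not the TUM script itself: it assumes the two trajectories are already associated frame-by-frame (the real script also matches poses by timestamp), rigidly aligns the estimated positions to ground truth with Horn's method, and reports the RMSE of the translational residuals.

```python
import numpy as np

def align(gt, est):
    """Least-squares rigid alignment (Horn's method via SVD).

    gt, est: (N, 3) arrays of corresponding positions.
    Returns R, t such that R @ est_i + t best matches gt_i.
    """
    gt_mean = gt.mean(axis=0)
    est_mean = est.mean(axis=0)
    gt_c = gt - gt_mean          # centered ground-truth points
    est_c = est - est_mean       # centered estimated points
    H = est_c.T @ gt_c           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(3)
    # Guard against a reflection in the least-squares solution.
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = Vt.T @ S @ U.T
    t = gt_mean - R @ est_mean
    return R, t

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after rigid alignment."""
    R, t = align(gt, est)
    aligned = est @ R.T + t
    residuals = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```

A sanity check: feeding in a trajectory that is an exact rigid transform of the ground truth should give an ATE of (numerically) zero, since the alignment step absorbs any global rotation and translation.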
For object tracking, however, it seems I cannot use -init tf to obtain the ground-truth object poses. How should I evaluate the object tracking trajectories quantitatively? Could you provide more details on the steps used to produce the results in the tables? Thanks!