
How to get the quantitative results as in the paper? #6

@ung264

Description

Hi,

Thanks again for publishing your work!

I am trying to reproduce the camera and object tracking results in Table I and Table II of the RAL paper. For camera tracking on the manipulation and rotation sequences, I used "-init tf -init_frame camera_true" as arguments and took the logged poses-0.txt as the ground-truth camera trajectory. I then evaluated ATE with the evaluate_ate.py script provided with the TUM RGB-D benchmark and got results comparable to those in the paper.
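For reference, here is a minimal sketch of the quantity evaluate_ate.py reports: it rigidly aligns the estimated trajectory to the ground truth with a least-squares (Horn/Kabsch) fit, then returns the RMSE of the translational residuals. This is a simplified stand-in written from the benchmark's description, not the script itself (it skips timestamp association and the scale option):

```python
import numpy as np

def align_rigid(src, dst):
    """Least-squares rigid alignment (Horn's method): find R, t
    minimizing ||R @ src + t - dst||. src, dst are 3xN arrays of
    corresponding positions."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    H = (src - mu_s) @ (dst - mu_d).T          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ate_rmse(gt_xyz, est_xyz):
    """Absolute trajectory error: RMSE of positional residuals
    after rigid alignment of the estimate onto the ground truth."""
    R, t = align_rigid(est_xyz, gt_xyz)
    residuals = (R @ est_xyz + t) - gt_xyz
    return np.sqrt((residuals ** 2).sum(axis=0).mean())

# Example: an estimate that is an exact rigid transform of the
# ground truth should have (numerically) zero ATE.
gt = np.random.rand(3, 50)
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
est = Rz @ gt + np.array([[1.0], [2.0], [3.0]])
print(ate_rmse(gt, est))
```

The same metric applies to an object trajectory once a per-object ground-truth pose log is available, which is what the question below is asking about.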

For object tracking, however, it seems I cannot use -init tf to obtain the ground-truth object pose. How should I evaluate the object-tracking trajectory quantitatively? Could you provide more details on the steps for reproducing the results in the tables? Thanks!
