README.md (+1 -1)

@@ -236,7 +236,7 @@ Please refer to [getting_started.md](docs/en/getting_started.md) for installatio

 ## Get Started

-Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMDetection3D. We provide guidance for quick run [with existing dataset](docs/en/user_guides/train_test.md) and [with customized dataset](docs/en/user_guides/2_new_data_model.md) for beginners. There are also tutorials for [learning configuration systems](docs/en/user_guides/config.md), [adding new dataset](docs/en/advanced_guides/customize_dataset.md), [designing data pipeline](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).
+Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMDetection3D. We provide guidance for quick run [with existing dataset](docs/en/user_guides/train_test.md) and [with new dataset](docs/en/user_guides/2_new_data_model.md) for beginners. There are also tutorials for [learning configuration systems](docs/en/user_guides/config.md), [customizing dataset](docs/en/advanced_guides/customize_dataset.md), [designing data pipeline](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).

 Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/notes/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
docs/en/advanced_guides/customize_dataset.md (+22 -22)

@@ -16,39 +16,39 @@ The ideal situation is that we can reorganize the customized raw data and conver

 #### Point cloud Format

-Currently, we only support '.bin' format point cloud for training and inference. Before training on your own datasets, you need to convert your point cloud files with other formats to '.bin' files. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference.
+Currently, we only support `.bin` format point cloud for training and inference. Before training on your own datasets, you need to convert your point cloud files with other formats to `.bin` files. The common point cloud data formats include `.pcd` and `.las`; we list some open-source tools for reference.

-1. Convert pcd to bin: https://github.com/DanielPollithy/pypcd
+1. Convert `.pcd` to `.bin`: https://github.com/DanielPollithy/pypcd

-   You can install pypcd with the following command:
+   You can install `pypcd` with the following command:

-2. Convert las to bin: The common conversion path is las -> pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor).
+2. Convert `.las` to `.bin`: The common conversion path is `.las -> .pcd -> .bin`, and the conversion from `.las -> .pcd` can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor).
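As an illustration of route 1, a minimal `.pcd`-to-`.bin` conversion sketch using that pypcd fork might look as follows. The `intensity` field name, file names, and the four-feature `[x, y, z, intensity]` output layout are assumptions (check your `.pcd` header for the actual fields), not part of this PR:

```python
import numpy as np
from pypcd import pypcd  # e.g. pip install git+https://github.com/DanielPollithy/pypcd.git

# Read the .pcd file; the available fields depend on your sensor/header.
pcd = pypcd.PointCloud.from_path('cloud.pcd')

# Pack x, y, z, intensity into the flat float32 layout that .bin loaders expect.
points = np.zeros((len(pcd.pc_data), 4), dtype=np.float32)
points[:, 0] = pcd.pc_data['x']
points[:, 1] = pcd.pc_data['y']
points[:, 2] = pcd.pc_data['z']
points[:, 3] = pcd.pc_data['intensity'].astype(np.float32)

points.tofile('cloud.bin')  # raw little-endian float32, no header
```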
 #### Label Format

 The most basic information: 3D bounding box and category label of each scene need to be contained in the annotation `.txt` file. Each line represents a 3D box in a certain scene as follows:

-```python
+```
 # format: [x, y, z, dx, dy, dz, yaw, category_name]
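Since each line is just seven numbers followed by a category name, a small reader for this format could look like the sketch below; the file layout follows the comment above, while the `Box3D` container and function name are hypothetical:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Box3D:
    x: float   # box center
    y: float
    z: float
    dx: float  # box dimensions
    dy: float
    dz: float
    yaw: float  # heading angle
    category: str


def load_labels(path: str) -> List[Box3D]:
    """Parse one `[x, y, z, dx, dy, dz, yaw, category_name]` box per line."""
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 8:
                continue  # skip blank or malformed lines
            *nums, category = parts
            boxes.append(Box3D(*map(float, nums), category=category))
    return boxes
```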
docs/en/advanced_guides/datasets/kitti_det.md (+26 -31)

@@ -32,7 +32,7 @@ mmdetection3d

 ### Create KITTI dataset

-To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations including object labels and bounding boxes. We also generate all single training objects' point cloud in KITTI dataset and save them as `.bin` files in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training or validation. Subsequently, create KITTI data by running
+To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations including object labels and bounding boxes. We also generate all single training objects' point cloud in KITTI dataset and save them as `.bin` files in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training or validation. Subsequently, create KITTI data by running:
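The command block itself is outside this hunk; for orientation, data preparation in MMDetection3D goes through the `tools/create_data.py` entry point, and the usual KITTI invocation (paths assume the default `data/kitti` layout) is along these lines:

```shell
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```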
[…]

- Or you can use `--cfg-options "test_evaluator.pklfile_prefix=results/kitti-3class/kitti_results" "test_evaluator.submission_prefix=results/kitti-3class/kitti_results"` after the test command, and run the test script directly.

After generating `results/kitti-3class/kitti_results/xxxxx.txt` files, you can submit these files to the KITTI benchmark. Please refer to the [KITTI official website](http://www.cvlibs.net/datasets/kitti/index.php) for more details.
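As a sketch of that `--cfg-options` route with the distributed test wrapper (the config name and checkpoint path are placeholders, not pinned by this PR):

```shell
./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py \
    work_dirs/pointpillars/latest.pth 8 \
    --cfg-options 'test_evaluator.pklfile_prefix=results/kitti-3class/kitti_results' \
    'test_evaluator.submission_prefix=results/kitti-3class/kitti_results'
```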
docs/en/advanced_guides/datasets/lyft_det.md (+6 -6)

@@ -40,8 +40,8 @@ Note that we follow the original folder names for clear organization. Please ren

 ## Dataset Preparation

-The way to organize Lyft dataset is similar to nuScenes. We also generate the .pkl and .json files which share almost the same structure.
-Next, we will mainly focus on the difference between these two datasets. For a more detailed explanation of the info structure, please refer to [nuScenes tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md).
+The way to organize Lyft dataset is similar to nuScenes. We also generate the `.pkl` files which share almost the same structure.
+Next, we will mainly focus on the difference between these two datasets. For a more detailed explanation of the info structure, please refer to [nuScenes tutorial](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docs/en/advanced_guides/datasets/nuscenes_det.md).

 To prepare info files for Lyft, run the following commands:
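The commands are elided in this diff; by analogy with the other datasets' preparation flow, the Lyft invocation of the same entry point would typically look like the following (flags are assumed, not taken from this PR):

```shell
python tools/create_data.py lyft --root-path ./data/lyft --out-dir ./data/lyft --extra-tag lyft
```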
@@ -90,7 +90,7 @@ mmdetection3d

 - info['lidar_points']['num_pts_feats']: The feature dimension of point.
 - info['lidar_points']['lidar2ego']: The transformation matrix from this lidar sensor to ego vehicle. (4x4 list)
 - info['lidar_points']['ego2global']: The transformation matrix from the ego vehicle to global coordinates. (4x4 list)
-- info['lidar_sweeps']: A list contains sweeps information (The intermediate lidar frames without annotations)
+- info['lidar_sweeps']: A list containing sweep information (the intermediate lidar frames without annotations).
 - info['lidar_sweeps'][i]['lidar_points']['data_path']: The lidar data path of i-th sweep.
 - info['lidar_sweeps'][i]['lidar_points']['lidar2ego']: The transformation matrix from this lidar sensor to ego vehicle in i-th sweep timestamp.
 - info['lidar_sweeps'][i]['lidar_points']['ego2global']: The transformation matrix from the ego vehicle in i-th sweep timestamp to global coordinates. (4x4 list)
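To make the structure above concrete, here is a small access sketch. It assumes the dev-1.x convention that samples sit under a `data_list` key in the pickle; verify that key against your generated file:

```python
import pickle

# Path assumes the default layout produced by tools/create_data.py.
with open('data/lyft/lyft_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

sample = infos['data_list'][0]                      # first annotated keyframe
lidar2ego = sample['lidar_points']['lidar2ego']     # 4x4 nested list
ego2global = sample['lidar_points']['ego2global']   # 4x4 nested list

# Intermediate, unannotated frames accumulated between keyframes.
for sweep in sample.get('lidar_sweeps', []):
    print(sweep['lidar_points']['data_path'])
```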
@@ -111,11 +111,11 @@ mmdetection3d

 Next, we will elaborate on the difference compared to nuScenes in terms of the details recorded in these info files.

-- without `lyft_database/xxxxx.bin`: This folder and `.bin` files are not extracted on the Lyft dataset due to the negligible effect of ground-truth sampling in the experiments.
+- Without `lyft_database/xxxxx.bin`: This folder and `.bin` files are not extracted on the Lyft dataset due to the negligible effect of ground-truth sampling in the experiments.

 - `lyft_infos_train.pkl`:

-  - Without info['instances'][i]['velocity'], There is no velocity measurement on Lyft.
+  - Without info['instances'][i]['velocity']: There is no velocity measurement on Lyft.
   - Without info['instances'][i]['num_lidar_pts'] and info['instances']['num_radar_pts']

 Here we only explain the data recorded in the training info files. The same applies to the validation set and test set (without instances).
@@ -160,7 +160,7 @@ where the first 3 dimensions refer to point coordinates, and the last refers to

 ## Evaluation

-An example to evaluate PointPillars with 8 GPUs with Lyft metrics is as follows.
+An example to evaluate PointPillars with 8 GPUs with Lyft metrics is as follows:
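The command block itself falls outside this hunk; a representative multi-GPU test invocation (config and checkpoint names are placeholders) would be:

```shell
./tools/dist_test.sh configs/pointpillars/pointpillars_hv_fpn_sbn-all_8xb2-2x_lyft-3d.py \
    work_dirs/pointpillars/latest.pth 8
```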