
Commit 5982f9e

[Docs] Refine the documentation (#1994)

* refine doc
* refine docs
* replace `CLASSES` with `classes`
* update doc
* Minor fix

Co-authored-by: Tai-Wang <tab_wang@outlook.com>

1 parent 4640743 commit 5982f9e

24 files changed (+371 -420 lines)

README.md

Lines changed: 1 addition & 1 deletion

@@ -236,7 +236,7 @@ Please refer to [getting_started.md](docs/en/getting_started.md) for installatio
 
 ## Get Started
 
-Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMDetection3D. We provide guidance for quick run [with existing dataset](docs/en/user_guides/train_test.md) and [with customized dataset](docs/en/user_guides/2_new_data_model.md) for beginners. There are also tutorials for [learning configuration systems](docs/en/user_guides/config.md), [adding new dataset](docs/en/advanced_guides/customize_dataset.md), [designing data pipeline](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).
+Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMDetection3D. We provide guidance for quick run [with existing dataset](docs/en/user_guides/train_test.md) and [with new dataset](docs/en/user_guides/2_new_data_model.md) for beginners. There are also tutorials for [learning configuration systems](docs/en/user_guides/config.md), [customizing dataset](docs/en/advanced_guides/customize_dataset.md), [designing data pipeline](docs/en/user_guides/data_pipeline.md), [customizing models](docs/en/advanced_guides/customize_models.md), [customizing runtime settings](docs/en/advanced_guides/customize_runtime.md) and [Waymo dataset](docs/en/advanced_guides/datasets/waymo_det.md).
 
 Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/notes/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
 

README_zh-CN.md

Lines changed: 5 additions & 6 deletions

@@ -19,7 +19,7 @@
 <div>&nbsp;</div>
 </div>
 
-[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection3d.readthedocs.io/en/1.1/)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection3d.readthedocs.io/zh_CN/1.1/)
 [![badge](https://github.com/open-mmlab/mmdetection3d/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection3d/actions)
 [![codecov](https://codecov.io/gh/open-mmlab/mmdetection3d/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection3d)
 [![license](https://img.shields.io/github/license/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/blob/master/LICENSE)
@@ -28,7 +28,7 @@
 
 **v1.1.0rc1** 版本已经在 2022.10.11 发布。
 
-由于坐标系的统一和简化,模型的兼容性会受到影响。目前,大多数模型都以类似的性能对齐了精度,但仍有少数模型在进行基准测试。在接下来的版本中,我们将更新所有的模型权重文件和基准。您可以在[变更日志](docs/zh_cn/notes/changelog.md)和[v1.0.x版本变更日志](docs/zh_cn/notes/changelog_v1.0.x.md)中查看更多详细信息。
+由于坐标系的统一和简化,模型的兼容性会受到影响。目前,大多数模型都以类似的性能对齐了精度,但仍有少数模型在进行基准测试。在接下来的版本中,我们将更新所有的模型权重文件和基准。您可以在[变更日志](docs/zh_cn/notes/changelog.md)和[v1.0.x 版本变更日志](docs/zh_cn/notes/changelog_v1.0.x.md)中查看更多详细信息。
 
 文档:https://mmdetection3d.readthedocs.io/
 
@@ -50,8 +50,7 @@ MMDetection3D 是一个基于 PyTorch 的目标检测开源工具箱,下一代
 
 - **支持户内/户外的数据集**
 
-  支持室内/室外的3D检测数据集,包括 ScanNet,SUNRGB-D,Waymo,nuScenes,Lyft,KITTI。
-
+  支持室内/室外的 3D 检测数据集,包括 ScanNet,SUNRGB-D,Waymo,nuScenes,Lyft,KITTI。
   对于 nuScenes 数据集,我们也支持 [nuImages 数据集](https://github.com/open-mmlab/mmdetection3d/tree/1.1/configs/nuimages)。
 
 - **与 2D 检测器的自然整合**
@@ -78,7 +77,7 @@ MMDetection3D 是一个基于 PyTorch 的目标检测开源工具箱,下一代
 
 ## 更新日志
 
-我们在 2022.10.11 发布了 **1.1.0rc1** 版本.
+我们在 2022.10.11 发布了 **1.1.0rc1** 版本
 
 更多细节和版本发布历史可以参考 [changelog.md](docs/zh_cn/notes/changelog.md)。
 
@@ -236,7 +235,7 @@ MMDetection3D 是一个基于 PyTorch 的目标检测开源工具箱,下一代
 
 ## 快速入门
 
-请参考[快速入门文档](docs/zh_cn/getting_started.md)学习 MMDetection3D 的基本使用。我们为新手提供了分别针对[已有数据集](docs/zh_cn/user_guides/train_test.md)和[新数据集](docs/zh_cn/user_guides/2_new_data_model.md)的使用指南。我们也提供了一些进阶教程,内容覆盖了[学习配置文件](docs/zh_cn/user_guides/config.md)、[增加数据集支持](docs/zh_cn/advanced_guides/customize_dataset.md)、[设计新的数据预处理流程](docs/zh_cn/user_guides/data_pipeline.md)、[增加自定义模型](docs/zh_cn/advanced_guides/customize_models.md)、[增加自定义的运行时配置](docs/zh_cn/advanced_guides/customize_runtime.md)和 [Waymo 数据集](docs/zh_cn/advanced_guides/datasets/waymo_det.md)。
+请参考[快速入门文档](docs/zh_cn/getting_started.md)学习 MMDetection3D 的基本使用。我们为新手提供了分别针对[已有数据集](docs/zh_cn/user_guides/train_test.md)和[新数据集](docs/zh_cn/user_guides/2_new_data_model.md)的使用指南。我们也提供了一些进阶教程,内容覆盖了[学习配置文件](docs/zh_cn/user_guides/config.md)、[增加自定义数据集](docs/zh_cn/advanced_guides/customize_dataset.md)、[设计新的数据预处理流程](docs/zh_cn/user_guides/data_pipeline.md)、[增加自定义模型](docs/zh_cn/advanced_guides/customize_models.md)、[增加自定义的运行时配置](docs/zh_cn/advanced_guides/customize_runtime.md)和 [Waymo 数据集](docs/zh_cn/advanced_guides/datasets/waymo_det.md)。
 
 请参考 [FAQ](docs/zh_cn/notes/faq.md) 查看一些常见的问题与解答。在升级 MMDetection3D 的版本时,请查看[兼容性文档](docs/zh_cn/notes/compatibility.md)以知晓每个版本引入的不与之前版本兼容的更新。
 

docs/en/advanced_guides/customize_dataset.md

Lines changed: 22 additions & 22 deletions

@@ -16,39 +16,39 @@ The ideal situation is that we can reorganize the customized raw data and conver
 
 #### Point cloud Format
 
-Currently, we only support '.bin' format point cloud for training and inference. Before training on your own datasets, you need to convert your point cloud files with other formats to '.bin' files. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference.
+Currently, we only support `.bin` format point cloud for training and inference. Before training on your own datasets, you need to convert your point cloud files with other formats to `.bin` files. The common point cloud data formats include `.pcd` and `.las`, we list some open-source tools for reference.
 
-1. Convert pcd to bin: https://github.com/DanielPollithy/pypcd
+1. Convert `.pcd` to `.bin`: https://github.com/DanielPollithy/pypcd
 
-   - You can install pypcd with the following command:
+   - You can install `pypcd` with the following command:
 
-```bash
-pip install git+https://github.com/DanielPollithy/pypcd.git
-```
+     ```bash
+     pip install git+https://github.com/DanielPollithy/pypcd.git
+     ```
 
-   - You can use the following command to read the pcd file and convert it to bin format and save it:
+   - You can use the following script to read the `.pcd` file and convert it to `.bin` format and save it:
 
-```python
-import numpy as np
-from pypcd import pypcd
-
-pcd_data = pypcd.PointCloud.from_path('point_cloud_data.pcd')
-points = np.zeros([pcd_data.width, 4], dtype=np.float32)
-points[:, 0] = pcd_data.pc_data['x'].copy()
-points[:, 1] = pcd_data.pc_data['y'].copy()
-points[:, 2] = pcd_data.pc_data['z'].copy()
-points[:, 3] = pcd_data.pc_data['intensity'].copy().astype(np.float32)
-with open('point_cloud_data.bin', 'wb') as f:
-    f.write(points.tobytes())
-```
+     ```python
+     import numpy as np
+     from pypcd import pypcd
 
-2. Convert las to bin: The common conversion path is las -> pcd -> bin, and the conversion from las -> pcd can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor).
+     pcd_data = pypcd.PointCloud.from_path('point_cloud_data.pcd')
+     points = np.zeros([pcd_data.width, 4], dtype=np.float32)
+     points[:, 0] = pcd_data.pc_data['x'].copy()
+     points[:, 1] = pcd_data.pc_data['y'].copy()
+     points[:, 2] = pcd_data.pc_data['z'].copy()
+     points[:, 3] = pcd_data.pc_data['intensity'].copy().astype(np.float32)
+     with open('point_cloud_data.bin', 'wb') as f:
+         f.write(points.tobytes())
+     ```
+
+2. Convert `.las` to `.bin`: The common conversion path is `.las -> .pcd -> .bin`, and the conversion from `.las -> .pcd` can be achieved through [this tool](https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor).
 
 #### Label Format
 
 The most basic information: 3D bounding box and category label of each scene need to be contained in the annotation `.txt` file. Each line represents a 3D box in a certain scene as follow:
 
-```python
+```
 # format: [x, y, z, dx, dy, dz, yaw, category_name]
 1.23 1.42 0.23 3.96 1.65 1.55 1.56 Car
 3.51 2.15 0.42 1.05 0.87 1.86 1.23 Pedestrian
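
As a side note, the whitespace-separated label format shown in this hunk is easy to load with plain NumPy. The sketch below is only illustrative (the helper name and file path are ours, not part of MMDetection3D), assuming each non-comment line is seven floats followed by a category name:

```python
import numpy as np

def parse_custom_labels(path):
    """Parse an annotation .txt file in the
    [x, y, z, dx, dy, dz, yaw, category_name] format shown above."""
    boxes, names = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                # skip blank lines and the '# format: ...' comment line
                continue
            *values, name = line.split()
            boxes.append([float(v) for v in values])  # x, y, z, dx, dy, dz, yaw
            names.append(name)
    return np.array(boxes, dtype=np.float32), names
```

Keeping the geometry as a `(N, 7)` float array and the category names as a separate list mirrors how box tensors and label lists are usually kept apart in detection codebases.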

docs/en/advanced_guides/datasets/kitti_det.md

Lines changed: 26 additions & 31 deletions

@@ -32,7 +32,7 @@ mmdetection3d
 
 ### Create KITTI dataset
 
-To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations including object labels and bounding boxes. We also generate all single training objects' point cloud in KITTI dataset and save them as `.bin` files in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training or validation. Subsequently, create KITTI data by running
+To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations including object labels and bounding boxes. We also generate all single training objects' point cloud in KITTI dataset and save them as `.bin` files in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training or validation. Subsequently, create KITTI data by running:
 
 ```bash
 mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets
@@ -98,7 +98,7 @@ kitti
 - info\['lidar_points'\]\['Tr_imu_to_velo'\]: Transformation from IMU coordinate to Velodyne coordinate with shape (4, 4).
 - info\['instances'\]: It is a list of dict. Each dict contains all annotation information of single instance. For the i-th instance:
   - info\['instances'\]\[i\]\['bbox'\]: List of 4 numbers representing the 2D bounding box of the instance, in (x1, y1, x2, y2) order.
-  - info\['instances'\]\[i\]\['bbox_3d'\]: List of 7 numbers representing the 3D bounding box of the instance, in (x, y, z, w, h, l, yaw) order.
+  - info\['instances'\]\[i\]\['bbox_3d'\]: List of 7 numbers representing the 3D bounding box of the instance, in (x, y, z, l, h, w, yaw) order.
   - info\['instances'\]\[i\]\['bbox_label'\]: An int indicate the 2D label of instance and the -1 indicating ignore.
   - info\['instances'\]\[i\]\['bbox_label_3d'\]: An int indicate the 3D label of instance and the -1 indicating ignore.
   - info\['instances'\]\[i\]\['depth'\]: Projected center depth of the 3D bounding box with respect to the image plane.
@@ -114,14 +114,15 @@ Please refer to [kitti_converter.py](https://github.com/open-mmlab/mmdetection3d
 
 ## Train pipeline
 
-A typical train pipeline of 3D detection on KITTI is as below.
+A typical train pipeline of 3D detection on KITTI is as below:
 
 ```python
 train_pipeline = [
-    dict(type='LoadPointsFromFile',
-         coord_type='LIDAR',
-         load_dim=4, # x, y, z, intensity
-         use_dim=4),
+    dict(
+        type='LoadPointsFromFile',
+        coord_type='LIDAR',
+        load_dim=4,  # x, y, z, intensity
+        use_dim=4),
     dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
     dict(type='ObjectSample', db_sampler=db_sampler),
     dict(
@@ -180,32 +181,26 @@ aos AP:97.70, 89.11, 87.38
 
 An example to test PointPillars on KITTI with 8 GPUs and generate a submission to the leaderboard is as follows:
 
-- First, you need to modify the `test_evaluator` dict in your config file to add `pklfile_prefix` and `submission_prefix`, just like:
-
-  ```python
-  data_root = 'data/kitti/'
-  test_evaluator = dict(
-      type='KittiMetric',
-      ann_file=data_root + 'kitti_infos_test.pkl',
-      metric='bbox',
-      pklfile_prefix='results/kitti-3class/kitti_results',
-      submission_prefix='results/kitti-3class/kitti_results')
-  ```
+- First, you need to modify the `test_dataloader` and `test_evaluator` dict in your config file, just like:
+
+  ```python
+  data_root = 'data/kitti/'
+  test_dataloader = dict(
+      dataset=dict(
+          ann_file='kitti_infos_test.pkl',
+          load_eval_anns=False,
+          data_prefix=dict(pts='testing/velodyne_reduced')))
+  test_evaluator = dict(
+      ann_file=data_root + 'kitti_infos_test.pkl',
+      format_only=True,
+      pklfile_prefix='results/kitti-3class/kitti_results',
+      submission_prefix='results/kitti-3class/kitti_results')
+  ```
 
 - And then, you can run the test script.
 
-  ```shell
-  mkdir -p results/kitti-3class
-
-  ./tools/dist_test.sh configs/pointpillars/configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8
-  ```
-
-- Or you can use `--cfg-options "test_evaluator.pklfile_prefix=results/kitti-3class/kitti_results" "test_evaluator.submission_prefix=results/kitti-3class/kitti_results"` after the test command, and run test script directly.
-
-  ```shell
-  mkdir -p results/kitti-3class
-
-  ./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8 --cfg-options 'test_evaluator.pklfile_prefix=results/kitti-3class/kitti_results' 'test_evaluator.submission_prefix=results/kitti-3class/kitti_results'
-  ```
+  ```shell
+  ./tools/dist_test.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py work_dirs/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class/latest.pth 8
+  ```
 
 After generating `results/kitti-3class/kitti_results/xxxxx.txt` files, you can submit these files to KITTI benchmark. Please refer to the [KITTI official website](http://www.cvlibs.net/datasets/kitti/index.php) for more details.
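
The `load_dim=4  # x, y, z, intensity` comment in the pipeline hunk above implies how the velodyne `.bin` files are laid out: a flat float32 buffer with four values per point. As a hedged aside (the helper below is ours, not an MMDetection3D API), such a file can be read back with NumPy:

```python
import numpy as np

def load_velodyne_bin(path, num_feats=4):
    """Read a KITTI-style .bin point cloud: a flat float32 buffer where
    each point contributes num_feats values (x, y, z, intensity for
    load_dim=4). Sketch only; not an MMDetection3D API."""
    points = np.fromfile(path, dtype=np.float32)
    if points.size % num_feats != 0:
        raise ValueError(f'{path}: {points.size} floats is not a multiple of {num_feats}')
    return points.reshape(-1, num_feats)
```

This also makes the round trip with the `.pcd` conversion snippet obvious: `points.tobytes()` on write, `np.fromfile(...).reshape(-1, 4)` on read.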

docs/en/advanced_guides/datasets/lyft_det.md

Lines changed: 6 additions & 6 deletions

@@ -40,8 +40,8 @@ Note that we follow the original folder names for clear organization. Please ren
 
 ## Dataset Preparation
 
-The way to organize Lyft dataset is similar to nuScenes. We also generate the .pkl and .json files which share almost the same structure.
-Next, we will mainly focus on the difference between these two datasets. For a more detailed explanation of the info structure, please refer to [nuScenes tutorial](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md).
+The way to organize Lyft dataset is similar to nuScenes. We also generate the `.pkl` files which share almost the same structure.
+Next, we will mainly focus on the difference between these two datasets. For a more detailed explanation of the info structure, please refer to [nuScenes tutorial](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/docs/en/advanced_guides/datasets/nuscenes_det.md).
 
 To prepare info files for Lyft, run the following commands:
 
@@ -90,7 +90,7 @@ mmdetection3d
 - info\['lidar_points'\]\['num_pts_feats'\]: The feature dimension of point.
 - info\['lidar_points'\]\['lidar2ego'\]: The transformation matrix from this lidar sensor to ego vehicle. (4x4 list)
 - info\['lidar_points'\]\['ego2global'\]: The transformation matrix from the ego vehicle to global coordinates. (4x4 list)
-- info\['lidar_sweeps'\]: A list contains sweeps information (The intermediate lidar frames without annotations)
+- info\['lidar_sweeps'\]: A list contains sweeps information (The intermediate lidar frames without annotations).
  - info\['lidar_sweeps'\]\[i\]\['lidar_points'\]\['data_path'\]: The lidar data path of i-th sweep.
 - info\['lidar_sweeps'\]\[i\]\['lidar_points'\]\['lidar2ego'\]: The transformation matrix from this lidar sensor to ego vehicle in i-th sweep timestamp
 - info\['lidar_sweeps'\]\[i\]\['lidar_points'\]\['ego2global'\]: The transformation matrix from the ego vehicle in i-th sweep timestamp to global coordinates. (4x4 list)
@@ -111,11 +111,11 @@ mmdetection3d
 
 Next, we will elaborate on the difference compared to nuScenes in terms of the details recorded in these info files.
 
-- without `lyft_database/xxxxx.bin`: This folder and `.bin` files are not extracted on the Lyft dataset due to the negligible effect of ground-truth sampling in the experiments.
+- Without `lyft_database/xxxxx.bin`: This folder and `.bin` files are not extracted on the Lyft dataset due to the negligible effect of ground-truth sampling in the experiments.
 
 - `lyft_infos_train.pkl`:
 
-  - Without info\['instances'\]\[i\]\['velocity'\], There is no velocity measurement on Lyft.
+  - Without info\['instances'\]\[i\]\['velocity'\]: There is no velocity measurement on Lyft.
   - Without info\['instances'\]\[i\]\['num_lidar_pts'\] and info\['instances'\]\['num_radar_pts'\]
 
 Here we only explain the data recorded in the training info files. The same applies to the validation set and test set (without instances).
@@ -160,7 +160,7 @@ where the first 3 dimensions refer to point coordinates, and the last refers to
 
 ## Evaluation
 
-An example to evaluate PointPillars with 8 GPUs with Lyft metrics is as follows.
+An example to evaluate PointPillars with 8 GPUs with Lyft metrics is as follows:
 
 ```shell
 bash ./tools/dist_test.sh configs/pointpillars/pointpillars_hv_fpn_sbn-all_8xb2-2x_lyft-3d.py checkpoints/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210517_202818-fc6904c3.pth 8