
Commit cb7c679

VVsssssk authored and ZwwWayne committed
[Benchmark] Add PV RCNN benchmark (#2045)
* fix a bug
* fix a batch inference bug
* fix docs
* add pvrcnn benchmark
* fix
* add link
* add
* fix lint
1 parent c543b48 commit cb7c679

File tree

6 files changed: +78 −1 lines changed

README.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -159,6 +159,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
         <li><a href="configs/point_rcnn">PointRCNN (CVPR'2019)</a></li>
         <li><a href="configs/parta2">Part-A2 (TPAMI'2020)</a></li>
         <li><a href="configs/centerpoint">CenterPoint (CVPR'2021)</a></li>
+        <li><a href="configs/pv_rcnn">PV-RCNN (CVPR'2020)</a></li>
       </ul>
       <li><b>Indoor</b></li>
       <ul>
@@ -227,6 +228,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
 | MonoFlex |||||||||||
 | SA-SSD |||||||||||
 | FCAF3D |||||||||||
+| PV-RCNN |||||||||||
 
 **Note:** All of the **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.
```

configs/pv_rcnn/README.md

Lines changed: 42 additions & 0 deletions
# PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection

> [PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection](https://arxiv.org/abs/1912.13192)

<!-- [ALGORITHM] -->

## Introduction

3D object detection has been receiving increasing attention from both industry and academia thanks to its wide applications in various fields such as autonomous driving and robotics. LiDAR sensors are widely adopted in autonomous driving vehicles and robots for capturing 3D scene information as sparse and irregular point clouds, which provide vital cues for 3D scene perception and understanding. In this paper, we propose to achieve high-performance 3D object detection by designing novel point-voxel integrated networks to learn better 3D features from irregular point clouds.

<div align=center>
<img src="https://user-images.githubusercontent.com/88368822/202114244-ccf52f56-b8c9-4f1b-9cc2-80c7a9952c99.png" width="800"/>
</div>

## Results and models

### KITTI

| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download |
| :---------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :------: |
| [SECFPN](./pv_rcnn_8xb2-80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 5.4 | | 72.28 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428.json) |

Note: mAP denotes the AP11 result on the 3-class setting under the moderate difficulty level.

Detailed performance on KITTI 3D detection, evaluated with the AP11 metric:

|            | Easy  | Moderate | Hard  |
| ---------- | :---: | :------: | :---: |
| Car        | 89.20 |  83.72   | 78.79 |
| Pedestrian | 66.64 |  59.84   | 55.33 |
| Cyclist    | 87.25 |  73.27   | 69.61 |

## Citation

```latex
@inproceedings{ShaoshuaiShi2020PVRCNNPF,
  title={PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection},
  author={Shaoshuai Shi and Chaoxu Guo and Li Jiang and Zhe Wang and Jianping Shi and Xiaogang Wang and Hongsheng Li},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
```
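To sanity-check the benchmark entry above, here is a minimal inference sketch against the released checkpoint. It assumes the mmdet3d 1.1 (dev-1.x) Python API (`init_model`, `inference_detector`) and uses hypothetical local paths for the downloaded checkpoint and a KITTI-format `.bin` point cloud; the exact return type of `inference_detector` varies across versions, so treat this as a sketch rather than the official usage.

```python
# Minimal sketch: run the released PV-RCNN checkpoint on one KITTI point cloud.
# Assumptions: mmdet3d 1.1 (dev-1.x) is installed, the config below exists in the
# repo, and the checkpoint/point-cloud paths are placeholders to replace locally.
from mmdet3d.apis import inference_detector, init_model

config_file = 'configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py'
checkpoint_file = 'checkpoints/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth'  # downloaded from the table above

model = init_model(config_file, checkpoint_file, device='cuda:0')

# A KITTI-format LiDAR point cloud (x, y, z, intensity) stored as float32 .bin.
result = inference_detector(model, 'data/kitti/training/velodyne/000008.bin')

# Depending on the mmdet3d version, `result` may be a Det3DDataSample or a
# (predictions, data) tuple; printing it shows the predicted 3D boxes either way.
print(result)
```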

configs/pv_rcnn/metafile.yml

Lines changed: 29 additions & 0 deletions
```yaml
Collections:
  - Name: PV-RCNN
    Metadata:
      Training Data: KITTI
      Training Techniques:
        - AdamW
      Training Resources: 8x A100 GPUs
      Architecture:
        - Feature Pyramid Network
    Paper:
      URL: https://arxiv.org/abs/1912.13192
      Title: 'PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection'
    README: configs/pv_rcnn/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/detectors/pv_rcnn.py#L12
      Version: v1.1.0rc2

Models:
  - Name: pv_rcnn_8xb2-80e_kitti-3d-3class
    In Collection: PV-RCNN
    Config: configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py
    Metadata:
      Training Memory (GB): 5.4
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 72.28
    Weights: https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth
```
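The metafile is plain YAML, so the benchmark numbers it records can be read programmatically. A small sketch (assuming PyYAML is installed and the file lives at the path shown above) that prints the KITTI mAP registered for each model:

```python
# Sketch: read the PV-RCNN metafile and print the recorded KITTI mAP.
# Assumes PyYAML is available and the repository layout shown in this commit.
import yaml

with open('configs/pv_rcnn/metafile.yml') as f:
    meta = yaml.safe_load(f)

for model in meta['Models']:
    for result in model['Results']:
        print(model['Name'], result['Dataset'], result['Metrics']['mAP'])

# Expected output, per the metafile above:
# pv_rcnn_8xb2-80e_kitti-3d-3class KITTI 72.28
```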

docs/en/model_zoo.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -108,6 +108,10 @@ Please refer to [SA-SSD](https://github.com/open-mmlab/mmdetection3d/blob/master
 
 Please refer to [FCAF3D](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/fcaf3d) for details. We provide FCAF3D baselines on the ScanNet, S3DIS, and SUN RGB-D datasets.
 
+### PV-RCNN
+
+Please refer to [PV-RCNN](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/pv_rcnn) for details. We provide PV-RCNN baselines on the KITTI dataset.
+
 ### Mixed Precision (FP16) Training
 
 Please refer to [Mixed Precision (FP16) Training on PointPillars](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) for details.
```
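The weights referenced by this model zoo entry can also be fetched and inspected directly before running anything. A minimal sketch using `torch.hub.load_state_dict_from_url` with the URL listed in the PV-RCNN README and metafile; the `meta`/`state_dict` layout is an OpenMMLab checkpoint convention and may differ between versions:

```python
# Sketch: download the PV-RCNN KITTI checkpoint and inspect its contents.
# Assumes an OpenMMLab-style checkpoint dict with 'state_dict' (and usually 'meta').
import torch

url = ('https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/'
       'pv_rcnn_8xb2-80e_kitti-3d-3class/'
       'pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth')

ckpt = torch.hub.load_state_dict_from_url(url, map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)

print('number of parameter tensors:', len(state_dict))
print('saved meta keys:', list(ckpt.get('meta', {}).keys()))
```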

tests/test_models/test_detectors/test_pvrcnn.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,7 +17,7 @@ def test_pvrcnn(self):
         DefaultScope.get_instance('test_pvrcnn', scope_name='mmdet3d')
         setup_seed(0)
         pvrcnn_cfg = get_detector_cfg(
-            'pvrcnn/pvrcnn_8xb2-80e_kitti-3d-3class.py')
+            'pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py')
         model = MODELS.build(pvrcnn_cfg)
         num_gt_instance = 2
         packed_inputs = create_detector_inputs(num_gt_instance=num_gt_instance)
```
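Outside of the unit test, the renamed config can be exercised in a standalone script by building the detector from the registry. A sketch under the assumption that mmdet3d dev-1.x exposes `register_all_modules` in `mmdet3d.utils` and that the detector class added by this commit is named `PVRCNN`:

```python
# Sketch: build PV-RCNN from the renamed config without running the test suite.
# Assumes mmdet3d dev-1.x (register_all_modules, mmdet3d.registry.MODELS) and
# that the config path matches the one added in this commit.
from mmengine.config import Config
from mmdet3d.registry import MODELS
from mmdet3d.utils import register_all_modules

register_all_modules()  # register mmdet3d components and set the 'mmdet3d' default scope

cfg = Config.fromfile('configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py')
model = MODELS.build(cfg.model)
print(type(model).__name__)  # expected to be 'PVRCNN'
```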
