Uses OpenCV to capture video from a camera or video file, YOLOv8 TensorRT to detect objects, and DeepSORT TensorRT or BYTETrack to track them.
Support for both NVIDIA dGPU and Jetson devices.
| Model | Device | FPS |
|---|---|---|
| OpenCV + YOLOv8n + DeepSORT | NVIDIA dGPU GTX 1660Ti 6GB | ~ |
| OpenCV + YOLOv8n + DeepSORT | NVIDIA Jetson Xavier NX 8GB | ~ |
| OpenCV + YOLOv8n + DeepSORT | NVIDIA Jetson Orin Nano 8GB | ~34 |
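The FPS figures above are end-to-end numbers for the capture → detect → track loop described at the top. The sketch below shows the shape of such a loop with a simple FPS counter; `detect()` and `track()` are placeholders, not the repo's actual classes (those live in `src/`).

```python
# Minimal sketch of the capture -> detect -> track loop behind the FPS numbers above.
# detect() and track() are placeholders; the repo's real detector/tracker live in src/.
import time

import cv2


def detect(frame):
    """Placeholder for YOLOv8 TensorRT inference: returns [(x1, y1, x2, y2, score, cls), ...]."""
    return []


def track(detections, frame):
    """Placeholder for DeepSORT / BYTETrack update: returns [(x1, y1, x2, y2, track_id), ...]."""
    return []


def run(source=0):
    cap = cv2.VideoCapture(source)  # camera index or path to a video file
    frames, start = 0, time.perf_counter()
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        dets = detect(frame)   # YOLOv8 TensorRT engine
        track(dets, frame)     # DeepSORT TensorRT or BYTETrack
        frames += 1
    cap.release()
    if frames:
        print(f"end-to-end FPS: {frames / (time.perf_counter() - start):.1f}")


if __name__ == "__main__":
    run(0)
```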
Test the speed of the YOLOv8 TensorRT model with `trtexec` from TensorRT (`/usr/src/tensorrt/bin/trtexec` on NVIDIA Jetson), batch size = 1; an example invocation is shown after the table.
| Model | Device | Throughput (qps) | Latency (ms) |
|---|---|---|---|
| yolov8n.engine | NVIDIA dGPU GTX 1660Ti 6GB | ~419.742 | ~2.91736 |
| yolov8n.engine | NVIDIA Jetson Xavier NX 8GB | ~ | ~ |
| yolov8n.engine | NVIDIA Jetson Orin Nano 8GB | ~137.469 | ~137.469 |
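For example, an engine can be timed by wrapping the `trtexec` call above. This is a minimal sketch: the engine path assumes the `models/engine` layout used later in this README, and on a dGPU the `trtexec` binary may sit elsewhere on your PATH.

```python
# Run trtexec on an engine file and print its throughput / latency summary lines.
import subprocess

TRTEXEC = "/usr/src/tensorrt/bin/trtexec"  # TensorRT sample binary (path on Jetson)


def benchmark(engine_path):
    out = subprocess.run(
        [TRTEXEC, f"--loadEngine={engine_path}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Throughput" in line or "Latency" in line:
            print(line.strip())


benchmark("models/engine/yolov8n.engine")  # or models/engine/deepsort.engine
```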
Test the speed of the DeepSORT TensorRT model with `trtexec` from TensorRT (`/usr/src/tensorrt/bin/trtexec` on NVIDIA Jetson), batch size = 1.
| Model | Device | Throughput (qps) | Latency (ms) |
|---|---|---|---|
| deepsort.engine | NVIDIA dGPU GTX 1660Ti 6GB | ~614.738 | ~1.52197 |
| deepsort.engine | NVIDIA Jetson Xavier NX 8GB | ~ | ~ |
| deepsort.engine | NVIDIA Jetson Orin Nano 8GB | ~546.135 | ~1.82227 |
- NVIDIA CUDA: 11.4
- NVIDIA TensorRT: 8.5.2
Clone repository and submodules
```bash
git clone --recurse-submodules https://github.com/nabang1010/YOLOv8_DeepSORT_TensorRT.git
```

Create new environment

```bash
conda create -n yolov8_ds python=3.8
```

Activate environment

```bash
conda activate yolov8_ds
```

Go to refs/YOLOv8-TensorRT and install the requirements for exporting models

```bash
cd refs/YOLOv8-TensorRT
pip3 install -r requirements.txt
pip3 install tensorrt easydict pycuda lap cython_bbox
```

Install python3-libnvinfer

```bash
sudo apt-get install python3-libnvinfer
```

Download the YOLOv8 weights from ultralytics here: yolov8n.pt, and save them in the models/to_export folder.
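Before exporting, it can be worth confirming that the installed TensorRT and CUDA versions match the ones listed above. A quick check, assuming the pip packages above installed cleanly and `nvcc` is on PATH:

```python
# Print the TensorRT and CUDA versions visible from the environment
# (expect TensorRT 8.5.x and CUDA 11.4, as listed above).
import subprocess

import tensorrt as trt

print("TensorRT:", trt.__version__)
# nvcc must be on PATH (e.g. /usr/local/cuda/bin); look for "release 11.4" in the output
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```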
Export YOLOv8 ONNX model
In refs/YOLOv8-TensorRT, run the following command to export the YOLOv8 ONNX model:
```bash
python3 export-det.py \
    --weights ../../models/to_export/yolov8n.pt \
    --iou-thres 0.65 \
    --conf-thres 0.25 \
    --topk 100 \
    --opset 11 \
    --sim \
    --input-shape 1 3 640 640 \
    --device cuda:0
```

The output .onnx model will be saved in the models/to_export folder; move it to the models/onnx folder:

```bash
mv ../../models/to_export/yolov8n.onnx ../../models/onnx/yolov8n.onnx
```
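Optionally, the exported graph can be sanity-checked with the `onnx` package before building the engine (a quick check, not part of the repo's scripts; run it from the repository root):

```python
# Load the exported model and run the ONNX checker; paths assume the repository root.
import onnx

model = onnx.load("models/onnx/yolov8n.onnx")
onnx.checker.check_model(model)
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expect a single 1x3x640x640 image input
```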
Export YOLOv8 TensorRT model

In refs/YOLOv8-TensorRT, run the following command to export the YOLOv8 TensorRT model:
```bash
python3 build.py \
    --weights ../../models/onnx/yolov8n.onnx \
    --iou-thres 0.65 \
    --conf-thres 0.25 \
    --topk 100 \
    --fp16 \
    --device cuda:0
```

The output .engine model will be saved in the models/onnx folder; move it to the models/engine folder:

```bash
mv ../../models/onnx/yolov8n.engine ../../models/engine/yolov8n.engine
```
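To confirm the engine deserializes on the target device, a minimal check with the TensorRT Python API (run from the repository root; uses the TensorRT 8.x binding API) might look like:

```python
# Deserialize the engine and list its I/O bindings (TensorRT 8.x binding API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("models/engine/yolov8n.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
assert engine is not None, "engine failed to deserialize"
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), tuple(engine.get_binding_shape(i)))
```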
Build OpenCV

```bash
bash build_opencv.sh
```
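After the build finishes, you can check which OpenCV build the Python environment actually picks up and whether CUDA / GStreamer support was compiled in (often the reason to build OpenCV from source on Jetson):

```python
# Print the OpenCV version and the CUDA / GStreamer lines of its build information.
import cv2

print("OpenCV:", cv2.__version__)
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line or "GStreamer" in line:
        print(line.strip())
```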
Export DeepSORT TensorRT model (if using BYTETrack, skip this step)

Install libeigen3-dev

```bash
apt-get install libeigen3-dev
```

Go to refs/deepsort_tensorrt and run the following commands to build onnx2engine:
```bash
cd refs/deepsort_tensorrt
mkdir build
cd build
cmake ..
make -j$(nproc)
```
If you get the error `fatal error: Eigen/Core: No such file or directory`, replace `#include <Eigen/*>` with `#include <eigen3/Eigen/*>` in all files of this repo (`datatype.h`, `kalmanfilter.cpp`) and rebuild.
If you get the error `error: looser exception specification on overriding virtual function 'virtual void Logger::log(nvinfer1::ILogger::Severity…'`, add `noexcept` before `override` in `logger.h` at line 239 and rebuild.
Run the following command to export the DeepSORT TensorRT model

```bash
./build/onnx2engine ../../models/onnx/deepsort.onnx ../../models/engine/deepsort.engine
```

Go to the src folder

```bash
cd src
```

Run YOLOv8 + DeepSORT
```bash
python3 yolov8_deepsort_trt.py --show
```
Run YOLOv8 + BYTETrack
```bash
python3 yolov8_bytetrack_trt.py --show
```
Coming soon