PyTorch implementation of the paper "DLFNet: Multi-Scale Dynamic Weighted Lane Feature Network for Complex Scenes" (ICIC 2025).
- DLFNet builds on the BiFPN concept and on the way humans perceive and reason about lane lines in the real world, integrating global semantic information with local feature details (a minimal sketch of this weighted-fusion idea follows below).
- On CULane and TuSimple, DLFNet achieves superior performance, especially at high IoU thresholds.
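As background, the dynamic weighted fusion idea can be illustrated with a minimal PyTorch sketch of BiFPN-style fast normalized fusion. The module name, shapes, and layer choices below are illustrative assumptions, not the actual DLFNet code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicWeightedFusion(nn.Module):
    """BiFPN-style fusion: one learnable non-negative weight per input scale."""
    def __init__(self, num_inputs, channels, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one scalar per input
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.eps = eps

    def forward(self, feats):
        # feats: list of (N, C, H, W) tensors, already resized to a common scale
        w = F.relu(self.weights)          # keep the fusion weights non-negative
        w = w / (w.sum() + self.eps)      # "fast normalized fusion" from BiFPN
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)

# Example: fuse two 64-channel feature maps of the same spatial size.
fuse = DynamicWeightedFusion(num_inputs=2, channels=64)
out = fuse([torch.randn(1, 64, 40, 100), torch.randn(1, 64, 40, 100)])
print(out.shape)  # torch.Size([1, 64, 40, 100])
```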
Tested only on Ubuntu 18.04 and 20.04 with:
- Python >= 3.8 (tested with Python 3.8)
- PyTorch >= 1.6 (tested with PyTorch 1.8)
- CUDA (tested with CUDA 11.1)
- Other dependencies described in requirements.txt
Clone this code to your workspace. We refer to this directory as $DLFNET_ROOT.

```bash
git clone https://github.com/EADMO/DLFNet.git
```

Create a conda virtual environment and install the dependencies:

```bash
conda create -n dlfnet python=3.8 -y
conda activate dlfnet

# Install PyTorch first; the cudatoolkit version should match the one on your system.
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# Install python packages
python setup.py build develop
```

Download CULane, then extract it to $CULANEROOT. Create a link to the data directory.
```bash
cd $DLFNET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane
```

For CULane, you should have a directory structure like this:

```
$CULANEROOT/driver_xx_xxframe    # data folders x6
$CULANEROOT/laneseg_label_w16    # lane segmentation labels
$CULANEROOT/list                 # data lists
```
Download TuSimple, then extract it to $TUSIMPLEROOT. Create a link to the data directory.

```bash
cd $DLFNET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple
```

For TuSimple, you should have a directory structure like this:

```
$TUSIMPLEROOT/clips                    # data folders
$TUSIMPLEROOT/label_data_xxxx.json     # label json files x4
$TUSIMPLEROOT/test_tasks_0627.json     # test tasks json file
$TUSIMPLEROOT/test_label.json          # test label json file
```
For TuSimple, the segmentation annotation is not provided, so we need to generate it from the JSON annotation.

```bash
python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# this will generate the seg_label directory
```
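For reference, here is a minimal sketch of what such a conversion boils down to: rasterizing one line of a TuSimple label file into a per-lane segmentation mask. This is an illustration based on the standard TuSimple format, not the actual tools/generate_seg_tusimple.py:

```python
import json
import numpy as np
import cv2

def label_to_seg(json_line, height=720, width=1280, thickness=16):
    # Each line of label_data_xxxx.json holds "lanes" (x-coords per lane,
    # -2 where a lane has no point), "h_samples" (shared y-coords), "raw_file".
    anno = json.loads(json_line)
    mask = np.zeros((height, width), dtype=np.uint8)
    for lane_id, xs in enumerate(anno["lanes"], start=1):
        pts = [(x, y) for x, y in zip(xs, anno["h_samples"]) if x >= 0]
        for p, q in zip(pts[:-1], pts[1:]):
            # Draw each lane with a distinct integer id, giving a per-lane label map.
            cv2.line(mask, p, q, color=lane_id, thickness=thickness)
    return mask
```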
For training, run
```bash
python main.py [configs/path_to_your_config] --gpus [gpu_num]
```

For example, run

```bash
python main.py configs/resnet18_culane.py --gpus 0
```

For testing, run
```bash
python main.py [configs/path_to_your_config] --[test|validate] --load_from [path_to_your_model] --gpus [gpu_num]
```

For example, run

```bash
python main.py configs/dla34_culane.py --validate --load_from culane_dla34.pth --gpus 0
```

This code can also output the visualization results and the ground truth (GT); just add --view or --view_gt to the command.
The visualization results will be saved in work_dirs/xxx/xxx/visualization.
To generate a demo, run

```bash
python generate_DEMO.py
```

Replace image_folder with the path to your visualization results.
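For reference, a demo script of this kind typically just stitches the visualization frames into a video. Below is a minimal OpenCV sketch under that assumption; the folder path, frame rate, and output settings are illustrative, and generate_DEMO.py itself may differ:

```python
import glob
import os
import cv2

image_folder = "work_dirs/xxx/xxx/visualization"  # replace with your visual path
frames = sorted(glob.glob(os.path.join(image_folder, "*.jpg")))
h, w = cv2.imread(frames[0]).shape[:2]
# Write all frames into a single video at 30 fps.
writer = cv2.VideoWriter("demo.avi", cv2.VideoWriter_fourcc(*"XVID"), 30, (w, h))
for f in frames:
    writer.write(cv2.imread(f))
writer.release()
```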
Results on CULane (%):

| Backbone | mF1 | F1@50 | F1@75 |
|---|---|---|---|
| ResNet-18 | 55.35 | 79.77 | 62.64 |
| ResNet-34 | 55.14 | 79.77 | 62.83 |
| ResNet-101 | 55.86 | 80.18 | 63.43 |
| DLA-34 | 56.25 | 80.45 | 63.56 |
Results on TuSimple (%):

| Backbone | F1 | Acc | FDR | FNR |
|---|---|---|---|---|
| ResNet-18 | 97.91 | 96.88 | 2.54 | 1.62 |
| ResNet-34 | 97.82 | 96.92 | 2.56 | 1.79 |
| ResNet-101 | 97.89 | 96.77 | 1.81 | 2.41 |

