This is the implementation of Deep Reformulated Laplacian Tone Mapping.

Requirements:

- PyCharm 2019
- pretrained VGG16
- checkpoint
- demo tfrecord
- TensorFlow 1.9.0
- Laval Indoor dataset (EULA required)
- Luminance HDR
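
The list above assumes a GPU build of TensorFlow 1.9.0. A minimal check like the following (not part of the repo) can confirm the environment before opening the project:

```python
# Sanity check for the TensorFlow requirement. Assumes tensorflow-gpu 1.9.0 and its
# CUDA dependencies are already installed.
from __future__ import print_function
import tensorflow as tf

print('TensorFlow version:', tf.__version__)         # expected: 1.9.0
print('GPU available:', tf.test.is_gpu_available())  # the code targets tensorflow-gpu
```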
Setup:

- Download this repo.
- Download the pretrained vgg16.npy and place it under the '/laplacianet/loss/pretrained/' folder.
- Download the checkpoint (password: 9v3t, if required). Unzip it and place all 4 files under the '/laplacianet/checkpoint/demo/' folder.
- Download the demo tfrecord (password: mcl0, if required). Unzip it and place it under the '/laplacianet/dataset/tfrecord/' folder.
- (optional) Download the WDR image used in the demo (password: frd0, if required). Unzip it and place it under the '/laplacianet/dataset/demo/' folder.

If downloading the files above requires a client app, follow the instructions in the prompt window to set up an account.
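
A quick way to verify that everything landed in the expected folders is a check like the one below. It is not part of the repo; the paths are assumptions that mirror the list above, taken relative to the repo root:

```python
# Verifies the downloaded assets are in the folders the code expects.
# All paths are relative to the repo root and mirror the setup list above.
import os

expected = [
    'laplacianet/loss/pretrained/vgg16.npy',  # pretrained VGG16 weights
    'laplacianet/checkpoint/demo/',           # the 4 unzipped checkpoint files
    'laplacianet/dataset/tfrecord/',          # demo tfrecord
    'laplacianet/dataset/demo/',              # (optional) demo WDR image
]

for path in expected:
    print('%-45s %s' % (path, 'OK' if os.path.exists(path) else 'MISSING'))
```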
Environment:

- Download PyCharm. Go to `File -> Open` and choose the project where it was downloaded.
- Go to `File -> Settings`. In the prompt window, select `Project: laplacianet -> Project Interpreter` on the left panel. At the top of the right panel, click the gear icon to add a Python interpreter with a Python 2.7 environment.
- In the virtual environment under the same panel, install the following dependencies:
  - opencv-python v3.4.4.19
  - tensorflow-gpu v1.9.0
  - imageio v2.4.1 (a plugin is needed to process the .hdr extension; run `imageio_download_bin freeimage` in the PyCharm terminal)
  - easydict v1.9
  - scipy v1.1.0
  - matplotlib v2.2.3
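
To confirm the interpreter picked up the pinned versions, a short comparison like the following can be run in the same virtual environment (a sketch, not part of the repo):

```python
# Compares installed package versions against the pinned versions listed above.
# pkg_resources ships with setuptools and works on Python 2.7.
from __future__ import print_function
import pkg_resources

pinned = {
    'opencv-python': '3.4.4.19',
    'tensorflow-gpu': '1.9.0',
    'imageio': '2.4.1',
    'easydict': '1.9',
    'scipy': '1.1.0',
    'matplotlib': '2.2.3',
}

for name in sorted(pinned):
    try:
        installed = pkg_resources.get_distribution(name).version
    except pkg_resources.DistributionNotFound:
        installed = 'not installed'
    flag = '' if installed == pinned[name] else '  <-- check this one'
    print('%-16s pinned %-9s installed %s%s' % (name, pinned[name], installed, flag))
```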
Demo:

- In PyCharm, run `/laplacianet/operation/test.py`.
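
To see the kind of input the network tone maps, the optional WDR download can be inspected with imageio (using the FreeImage plugin installed above). The snippet below is only an illustration; it assumes the optional image was unzipped into '/laplacianet/dataset/demo/' with an `.hdr` extension:

```python
# Prints the resolution and dynamic range of the demo WDR image(s), illustrating the
# range the network has to compress. Assumes the optional download sits in
# laplacianet/dataset/demo/ (relative to the repo root) as .hdr files.
from __future__ import print_function
import glob
import imageio

for path in glob.glob('laplacianet/dataset/demo/*.hdr'):
    img = imageio.imread(path)  # float32 linear radiance, via the FreeImage plugin
    print(path, img.shape, 'radiance range: %.4f to %.2f' % (img.min(), img.max()))
```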
To train and test on your own data:

1. Contact the author to request full access to the Laval Indoor dataset (~170 GB).
2. Follow the data preprocessing steps specified in the paper to process the data.
3. Generate the label images. Luminance HDR and Photoshop are recommended.
4. Divide the data into a train set and a test set. Place the `.hdr` images of the train set under the '/laplacianet/dataset/train/hdr/' folder and the corresponding label images created in step 3 under the '/laplacianet/dataset/train/ldr/' folder. Place the `.hdr` images of the test set under the '/laplacianet/dataset/test/hdr/' folder and the corresponding label images created in step 3 under the '/laplacianet/dataset/test/ldr/' folder.
5. To start training, run `/laplacianet/operation/train_high_layer.py` in PyCharm to train the high-frequency layer and `/laplacianet/operation/train_bottom_layer.py` to train the low-frequency layer. After both layers are trained, run `/laplacianet/operation/train_all.py` to fine-tune the network. Modify the parameters at the top of the code to specify the layer level `n` (the sketch after this list illustrates what `n` controls). Run `/laplacianet/operation/tfboard.py` to monitor training with TensorBoard.
6. To test, open `/laplacianet/operation/test.py`, set the parameter `mode` to `'test'`, and set the level parameter `n` to the same value used in the training phase. Then run `/laplacianet/operation/test.py`.
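
For orientation, the layer level `n` is the depth of the Laplacian decomposition that splits an image into high-frequency (detail) layers and one low-frequency (bottom) layer, which the high-frequency and bottom sub-networks are trained on. The sketch below is a conventional Laplacian pyramid written with OpenCV, shown only to illustrate what `n` controls; it is not the repo's implementation, and the paper reformulates this decomposition, so follow the paper for the exact layers used in training.

```python
# Conventional Laplacian-pyramid decomposition (illustrative sketch only, not the
# repo's code): n high-frequency (detail) layers plus one low-frequency (bottom) layer.
import cv2
import numpy as np

def laplacian_pyramid(img, n):
    """Split img into n high-frequency layers and the remaining bottom layer."""
    high_layers = []
    current = img.astype(np.float32)
    for _ in range(n):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        high_layers.append(current - up)  # detail lost by one downsampling step
        current = down
    return high_layers, current           # 'current' is the low-frequency bottom layer

def reconstruct(high_layers, bottom):
    """Invert the decomposition by upsampling and adding the detail layers back."""
    current = bottom
    for high in reversed(high_layers):
        current = cv2.pyrUp(current, dstsize=(high.shape[1], high.shape[0])) + high
    return current

if __name__ == '__main__':
    frame = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in for an HDR frame
    highs, bottom = laplacian_pyramid(frame, n=4)
    print('max reconstruction error:',
          float(np.abs(reconstruct(highs, bottom) - frame).max()))
```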



