
DCVL-3D/SSS_release


💡 Shape-Selective Splatting: Regularizing the Shape of Gaussian for Sparse-View Rendering

This repository provides the official PyTorch implementation of the paper "Shape-Selective Splatting: Regularizing the Shape of Gaussian for Sparse-View Rendering".

Gun Ryu and Wonjun Kim (Corresponding Author)

IEEE Signal Processing Letters, vol. 32, pp. 3172–3176, 2025.


📦 Installation

🛠 Environment Setup

Install the required dependencies via conda:

conda env create -f environment.yml
conda activate sss

🗂️ Dataset Preparation

In the data preparation stage, we first reconstruct sparse-view inputs using Structure-from-Motion (SfM) with the provided camera poses from the datasets. Then, we perform dense stereo matching using COLMAP’s patch_match_stereo function, followed by stereo_fusion to generate the dense stereo point cloud.

Setup Instructions
mkdir dataset
cd dataset

# Download LLFF dataset
gdown 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g

# Generate sparse point cloud using COLMAP (limited views) for LLFF
python tools/colmap_llff.py

# Download MipNeRF-360 dataset
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip -d mipnerf360 360_v2.zip

# Generate sparse point cloud using COLMAP (limited views) for MipNeRF-360
python tools/colmap_360.py
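The dense stereo stage described above (patch_match_stereo followed by stereo_fusion) can be sketched with standard COLMAP CLI calls. This is a sketch, not part of this repository: the `$SCENE` path is a placeholder, and the sparse SfM output is assumed to already exist under `$SCENE/sparse/0`.

```shell
# Sketch of the dense stereo stage, assuming COLMAP is installed and
# sparse SfM results already exist under "$SCENE/sparse/0".
SCENE=dataset/llff/fern   # placeholder scene path

if command -v colmap >/dev/null 2>&1; then
  # Undistort images into a COLMAP dense workspace.
  colmap image_undistorter \
      --image_path "$SCENE/images" \
      --input_path "$SCENE/sparse/0" \
      --output_path "$SCENE/dense"
  # Per-view depth/normal estimation via PatchMatch stereo.
  colmap patch_match_stereo \
      --workspace_path "$SCENE/dense" \
      --workspace_format COLMAP
  # Fuse the per-view depth maps into a dense point cloud.
  colmap stereo_fusion \
      --workspace_path "$SCENE/dense" \
      --workspace_format COLMAP \
      --output_path "$SCENE/dense/fused.ply"
else
  echo "colmap not found on PATH; skipping dense stereo."
fi
```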

We also provide preprocessed sparse and dense point clouds for convenience. You can download them via the link below:

👉 Download Preprocessed Point Clouds

Furthermore, we estimate monocular depth using the method described in Fine-Tuning Image-Conditional Diffusion Models. Save the resulting depth .npy files in the depth_npy_{resolution}/ directory of each scene, which sits alongside the original images/ directory. Each filename must match the corresponding image filename, with the suffix _pred.npy appended.

Example layout for “fern” scene (8× downsampled)
fern/
├── images/
│   ├── IMG_4043.JPG
│   ├── IMG_4044.JPG
│   ├── IMG_4045.JPG
│   └── … other `.JPG` files
├── sparse/
├── dense/
└── depth_npy_8/
    ├── IMG_4043_pred.npy
    ├── IMG_4044_pred.npy
    ├── IMG_4045_pred.npy
    └── … other `{image_name}_pred.npy` files
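The naming convention above can be expressed as a small path helper (a sketch for illustration only; `depth_path` is a hypothetical name, not part of this repository):

```python
from pathlib import Path

def depth_path(image_path: Path, resolution: int) -> Path:
    """Map an image file to its expected monocular-depth .npy file.

    e.g. fern/images/IMG_4043.JPG -> fern/depth_npy_8/IMG_4043_pred.npy
    """
    scene = image_path.parent.parent            # images/ sits inside the scene dir
    depth_dir = scene / f"depth_npy_{resolution}"
    return depth_dir / f"{image_path.stem}_pred.npy"

print(depth_path(Path("fern/images/IMG_4043.JPG"), 8).as_posix())
# -> fern/depth_npy_8/IMG_4043_pred.npy
```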

🏋️‍♂️ Training

🖼️ LLFF Dataset

To train on a single LLFF scene, use the following command:

python train.py -s ${DATASET_PATH} -m ${OUTPUT_PATH} --eval -r 8 --n_views {3 or 6 or 9}

🖼️ MipNeRF-360 Dataset

To train on a single MipNeRF-360 scene, use the following command:

python train.py -s ${DATASET_PATH} -m ${OUTPUT_PATH} --eval -r 8 --n_views {12 or 24}
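To reproduce all sparse-view settings in one pass, the per-scene command above can be wrapped in a loop. The scene list (the eight standard LLFF scenes) and the output layout below are assumptions, not fixed by this repository:

```shell
# Hypothetical batch training over the eight LLFF scenes and all view counts.
SCENES="fern flower fortress horns leaves orchids room trex"

for scene in $SCENES; do
  for views in 3 6 9; do
    if [ -f train.py ]; then
      python train.py -s "dataset/llff/$scene" \
          -m "output/llff/${scene}_${views}views" \
          --eval -r 8 --n_views "$views"
    fi
  done
done
```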

🎥 Rendering

You can render a target scene using the following command:

🖼️ LLFF Dataset

python render.py -s ${DATASET_PATH} -m ${MODEL_PATH} --eval -r 8 --iteration 10000

🖼️ MipNeRF-360 Dataset

python render.py -s ${DATASET_PATH} -m ${MODEL_PATH} --eval -r 8 --iteration 10000

📊 Evaluation

You can evaluate the model performance using the following command:

🖼️ LLFF Dataset

python metrics.py --model_paths ${MODEL_PATH}

🖼️ MipNeRF-360 Dataset

python metrics.py --model_paths ${MODEL_PATH}
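Rendering and evaluation can be chained per trained model. The directory layout below is only an assumption (model folders named `{scene}_{n}views` under `output/llff/`), shown as a sketch:

```shell
# Hypothetical render-then-evaluate pass over trained LLFF models.
ITER=10000

for model in output/llff/*views; do
  scene=$(basename "$model" | cut -d_ -f1)   # e.g. "fern_3views" -> "fern"
  if [ -f render.py ] && [ -d "$model" ]; then
    python render.py -s "dataset/llff/$scene" -m "$model" \
        --eval -r 8 --iteration "$ITER"
    python metrics.py --model_paths "$model"
  fi
done
```

If the glob matches nothing, `$model` stays the literal pattern; the `[ -d "$model" ]` guard keeps the loop from acting on it.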

🧪 Experimental Results

✨ Qualitative Results



📎 Citation

If you find this work helpful, please consider citing:

@ARTICLE{ryu2025sss,
  author={Ryu, Gun and Kim, Wonjun},
  journal={IEEE Signal Processing Letters}, 
  title={Shape-Selective Splatting: Regularizing the Shape of Gaussian for Sparse-View Rendering},
  year={2025},
  volume={32},
  pages={3172-3176},
  doi={10.1109/LSP.2025.3596225}
}

📫 Contact

If you have any questions or issues, feel free to reach out.
