DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks

Project Page | Video | Presentation | Paper | Data

Licensing

The majority of this project is licensed under CC-BY-NC, except for adapted third-party code, which is available under separate license terms:

  • nerf is licensed under the MIT license
  • nerf-pytorch is licensed under the MIT license
  • FLIP is licensed under the BSD-3-Clause license
  • Python-IW-SSIM is licensed under the BSD license

General

This repository contains the source code for the paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks", as well as a customized/partial port of the nerf-pytorch codebase by Yen-Chen Lin.

The codebase has been tested on Ubuntu 20.04 using an RTX 2080 Ti with 11 GB of VRAM. It should also work on other distributions, as well as on Windows, although it is not regularly tested there. Long file paths generated for experiments might cause issues on Windows, so we recommend using a very shallow output folder (such as D:/logs or similar).

Repo Structure

configs/ contains example configuration files to get started with experiments.

src/ contains the PyTorch training/inference framework that handles training of all supported network types.

requirements.txt lists the required Python packages for the codebase. We recommend conda to set up the development environment. Note that PyTorch 1.8 is the minimum working version, as earlier versions have issues with the parallel dataloaders.
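
A typical environment setup (assuming conda is installed; the environment name donerf is just an example) could look like this:

conda create -n donerf python=3.8
conda activate donerf
pip install -r requirements.txt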

Datasets

Our datasets follow a format similar to the one used in the original NeRF code repository: we read .json files containing the camera poses, as well as images (and depth maps) for each view from various directories.
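
As a rough sketch of how such a pose file can be read in Python (a minimal example assuming the NeRF-style keys camera_angle_x, frames, transform_matrix, and file_path; the exact layout of our .json files may differ slightly):

import json
import numpy as np

# Load a NeRF-style pose file and extract per-frame camera-to-world matrices.
with open("transforms_train.json", "r") as f:
    meta = json.load(f)

fov_x = meta["camera_angle_x"]  # horizontal field of view in radians
for frame in meta["frames"]:
    pose = np.array(frame["transform_matrix"])  # 4x4 camera-to-world matrix
    image_path = frame["file_path"]  # image path relative to the dataset root
    # pose[:3, :3] is the camera rotation, pose[:3, 3] the camera position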

The dataset can be found at https://repository.tugraz.at/records/jjs3x-4f133.

Training / Example Commands

To train a network with a given configuration file, you can adapt the following example command, executed from within the src/ directory. Everything in angle brackets needs to be replaced by a specific value depending on your use case; please refer to src/util/config.py for all valid configuration options. All configuration options can also be supplied via the command line.

The following basic command trains a DONeRF with 2 samples per ray, where the oracle network is trained for 300000 iterations first, and the shading network for 300000 iterations afterwards.

python train.py -c ../configs/DONeRF_2_samples.ini --data <PATH_TO_DATASET_DIRECTORY> --logDir <PATH_TO_OUTPUT_DIRECTORY> 

A specific CUDA device can be chosen for training by supplying the --device argument:

python train.py -c ../configs/DONeRF_2_samples.ini --data <PATH_TO_DATASET_DIRECTORY> --logDir <PATH_TO_OUTPUT_DIRECTORY> --device <DEVICE_ID>

By default, our dataloader loads images on-demand by using 8 parallel workers. To store all data on the GPU at all times (for faster training), supply the --storeFullData argument:

python train.py -c ../configs/DONeRF_2_samples.ini --data <PATH_TO_DATASET_DIRECTORY> --logDir <PATH_TO_OUTPUT_DIRECTORY> --device <DEVICE_ID> --storeFullData

A complete example command that trains a DONeRF with 8 samples per ray on the classroom dataset using CUDA device 0, storing the outputs in /data/output_results/, could look like this:

python train.py -c ../configs/DONeRF_2_samples.ini --data /data/classroom/ --logDir /data/output_results/ --device 0 --storeFullData --numRayMarchSamples 8 --numRayMarchSamples 8

(Note that we pass numRayMarchSamples twice: the first value is ignored, since the first network in this particular config file does not use raymarching, and certain config options are specified per network.)

Testing / Example Commands

By default, the framework produces a rendered output image every epochsRender iterations and validates on the validation set every epochsValidate iterations.

Videos can be generated by supplying .json paths for the poses; epochsVideo will produce a video from a predefined path at regular intervals.
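
These options are set in the configuration files; a hypothetical excerpt (option names as above, values purely illustrative) might look like this:

epochsRender = 50
epochsValidate = 25
epochsVideo = 100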

To run just an inference pass for all the test images and for a given video path, you can use src/test.py.

It takes the same arguments and configuration files as src/train.py, so following the example training command, you can use src/test.py as follows:

python test.py -c ../configs/DONeRF_2_samples.ini --data /data/classroom/ --logDir /data/output_results/ --device 0 --storeFullData --numRayMarchSamples 8 --numRayMarchSamples 8 --camPath cam_path_rotate --outputVideoName cam_path_rotate --videoFrames 300

Evaluation

To generate quantitative results (and also output images/videos/diffs similar to what src/test.py produces), you can use src/evaluate.py. To evaluate directly after training, supply the --performEvaluation flag to any training command. This script only requires the --data and --logDir options to locate the results of the training procedure, and has some additional evaluation-specific options that can be inspected at the top of def main() (such as skipping certain evaluation procedures or only evaluating specific outputs).
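
Following the earlier examples, a minimal invocation (executed from within the src/ directory) could look like this:

python evaluate.py --data /data/classroom/ --logDir /data/output_results/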

src/evaluate.py performs the evaluation on all subdirectories (if it hasn't done so already), so you only need to run this script once per dataset, and all contained results are evaluated sequentially.

To aggregate the resulting outputs (MSE, SSIM, FLIP, FLOP/pixel, number of parameters), you can use src/comparison.py to generate a summary .csv file.

Citation

If you find this repository useful in any way or use/modify DONeRF in your research, please consider citing our paper:

@article{neff2021donerf,
author = {Neff, T. and Stadlbauer, P. and Parger, M. and Kurz, A. and Mueller, J. H. and Chaitanya, C. R. A. and Kaplanyan, A. and Steinberger, M.},
title = {DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks},
journal = {Computer Graphics Forum},
volume = {40},
number = {4},
pages = {45-59},
keywords = {CCS Concepts: Computing methodologies → Rendering},
doi = {10.1111/cgf.14340},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14340},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14340},
abstract = {The recent research explosion around implicit neural representations, such as NeRF, shows that there is immense potential for implicitly storing high-quality scene and lighting information in compact neural networks. However, one major limitation preventing the use of NeRF in real-time rendering applications is the prohibitive computational cost of excessive network evaluations along each view ray, requiring dozens of petaFLOPS. In this work, we bring compact neural representations closer to practical rendering of synthetic content in real-time applications, such as games and virtual reality. We show that the number of samples required for each view ray can be significantly reduced when samples are placed around surfaces in the scene without compromising image quality. To this end, we propose a depth oracle network that predicts ray sample locations for each view ray with a single network evaluation. We show that using a classification network around logarithmically discretized and spherically warped depth values is essential to encode surface locations rather than directly estimating depth. The combination of these techniques leads to DONeRF, our compact dual network design with a depth oracle network as its first step and a locally sampled shading network for ray accumulation. With DONeRF, we reduce the inference costs by up to 48× compared to NeRF when conditioning on available ground truth depth information. Compared to concurrent acceleration methods for raymarching-based neural representations, DONeRF does not require additional memory for explicit caching or acceleration structures, and can render interactively (20 frames per second) on a single GPU.},
year = {2021}
}
Comments
  • specify the compatible versions of ptflops

    Since add_flops_counting_methods() has been removed from ptflops' flops_counter after version 0.6.7, the compatible versions of ptflops should be specified (e.g., by pinning ptflops<=0.6.7).

    The rest of requirements.txt has been tested and is correct.

  • pin error

    Any idea what this problem might be, @thomasneff? I am using the code out of the box with the pavillon dataset for training.

    WARNING! - import of cuda kernels for 'disc_depth_multiclass' failed - falling back to PyTorch
    Training config: lo_l1.0_0.0_SpPoDir[128]-relu0(256x8)-CD-128-5-5_l1.0_10.0_RayMarchFromPoses_nSD[2_LSfCD_128_0.0](nerf(10-4))-relu1(256x80..63-7.63.)-RGBARayMarch (../configs/DONeRF_2_samples.ini)
    no Checkpoints found
    no Checkpoints found
    epoch=21         loss=0.13672528:   0%|                                                                                    | 20/300001 [00:03<7:07:22, 11.70it/s]
    Exception in thread Thread-4:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/threading.py", line 932, in _bootstrap_inner
        self.run()
      File "/opt/conda/lib/python3.8/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/pin_memory.py", line 28, in _pin_memory_loop
        idx, data = r
    ValueError: not enough values to unpack (expected 2, got 0)
    epoch=21         loss=0.13672528:   0%|                                                                                   | 21/300001 [00:08<34:03:39,  2.45it/s]
    Traceback (most recent call last):
      File "train.py", line 344, in <module>
        main()
      File "train.py", line 329, in main
        pre_train(train_config)
      File "train.py", line 136, in pre_train
        batch_iterator = iter(train_config.train_data_loader)
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 349, in __iter__
        self._iterator._reset(self)
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 852, in _reset
        data = self._get_data()
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1029, in _get_data
        raise RuntimeError('Pin memory thread exited unexpectedly')
    RuntimeError: Pin memory thread exited unexpectedly
    
  • Adding Code of Conduct file

    This pull request was created automatically because we noticed your project was missing a Code of Conduct file.

    Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors.

    This PR was crafted with love by Facebook's Open Source Team.

  • Adding Contributing file

    This pull request was created automatically because we noticed your project was missing a Contributing file.

    CONTRIBUTING files explain how a developer can contribute to the project - which you should actively encourage.

    This PR was crafted with love by Facebook's Open Source Team.

  • How to use the dataset's transforms?

    Thank you very much for your excellent work! While using your dataset, how can I get every image's camera-to-world pose?

    How do I apply the transform I get from the .json? Do I need to convert from the Blender coordinate system to the OpenCV coordinate system, like in NeRF?

  • Added TensorRT+CUDA real-time viewer prototype and updated licenses to reflect that

    Included the real-time viewer prototype code in the subdirectory tensorrt_real_time_viewer. This is the prototype implementation that we used to generate the performance numbers (and real-time videos) for our paper submission last year. Expect warnings and potentially bugs and/or other issues, as this is not intended to be a particularly reliable or optimized implementation.

  • Training/testing the model over a colmap output

    Hey authors, great work! I was able to run the stock examples (forest, for example). How do I train this model on a colmap output? I can't find a way to get the .json format for the camera parameters etc. I see a similar issue here: https://github.com/facebookresearch/nonrigid_nerf/issues/7

  • import of kernels failed

    WARNING! - import of cuda kernels for 'disc_depth_multiclass' failed - falling back to PyTorch

    @thomasneff, is there something missing in the repo? How did you make those kernels work?

  • Colab notebook

    Thanks for sharing this fantastic work. Would you consider releasing a Google Colab notebook for less technically inclined people like myself to try this out? Thanks in advance!
