A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results of the original paper.

NeRF-pytorch

NeRF (Neural Radiance Fields) is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes. Here are some videos generated by this repository (pre-trained models are provided below).

This project is a faithful PyTorch implementation of NeRF that reproduces the results while running 1.3 times faster. The code is based on the authors' TensorFlow implementation here, and has been tested to match it numerically.

Installation

git clone https://github.com/yenchenlin/nerf-pytorch.git
cd nerf-pytorch
pip install -r requirements.txt
Dependencies

  • PyTorch 1.4
  • matplotlib
  • numpy
  • imageio
  • imageio-ffmpeg
  • configargparse

The LLFF data loader requires ImageMagick.

You will also need the LLFF code (and COLMAP) set up to compute poses if you want to run on your own real data.
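
For example, with the LLFF repo cloned, pose estimation for a folder of your own images is typically invoked as follows (an illustrative command; see the LLFF README for the exact usage):

python imgs2poses.py <your_scene_dir>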

How To Run?

Quick Start

Download data for two example datasets: lego and fern

bash download_example_data.sh

To train a low-res lego NeRF:

python run_nerf.py --config configs/lego.txt

After training for 100k iterations (~4 hours on a single 2080 Ti), you can find the following video at logs/lego_test/lego_test_spiral_100000_rgb.mp4.


To train a low-res fern NeRF:

python run_nerf.py --config configs/fern.txt

After training for 200k iterations (~8 hours on a single 2080 Ti), you can find the following videos at logs/fern_test/fern_test_spiral_200000_rgb.mp4 and logs/fern_test/fern_test_spiral_200000_disp.mp4.


More Datasets

To play with other scenes presented in the paper, download the data here. Place the downloaded dataset according to the following directory structure:

├── configs
│   ├── ...
│
├── data
│   ├── nerf_llff_data
│   │   └── fern
│   │   └── flower  # downloaded llff dataset
│   │   └── horns   # downloaded llff dataset
│   │   └── ...
│   ├── nerf_synthetic
│   │   └── lego
│   │   └── ship    # downloaded synthetic dataset
│   │   └── ...

To train NeRF on different datasets:

python run_nerf.py --config configs/{DATASET}.txt

Replace {DATASET} with trex | horns | flower | fortress | lego | etc.
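
Each config is a plain text file read by configargparse. The sketch below illustrates the typical contents of an LLFF config such as configs/fern.txt; the exact values shipped with the repo may differ:

# illustrative config sketch (values are assumptions, not the shipped file)
expname = fern_test
basedir = ./logs
datadir = ./data/nerf_llff_data/fern
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64
use_viewdirs = True
raw_noise_std = 1e0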


To test NeRF trained on different datasets:

python run_nerf.py --config configs/{DATASET}.txt --render_only

Replace {DATASET} with trex | horns | flower | fortress | lego | etc.

Pre-trained Models

You can download the pre-trained models here. Place the downloaded directory in ./logs in order to test it later. See the following directory structure for an example:

├── logs 
│   ├── fern_test
│   ├── flower_test  # downloaded logs
│   ├── trex_test    # downloaded logs

Reproducibility

Tests that ensure the results of all functions and the training loop match the official implementation are contained in a separate branch, reproduce. One can check it out and run the tests:

git checkout reproduce
py.test

Method

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall*¹, Pratul P. Srinivasan*¹, Matthew Tancik*¹, Jonathan T. Barron², Ravi Ramamoorthi³, Ren Ng¹
¹UC Berkeley, ²Google Research, ³UC San Diego
*denotes equal contribution

A neural radiance field is a simple fully connected network (weights are ~5MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume", so volume rendering can be used to differentiably render new views.
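
As a concrete illustration of that rendering step, here is a minimal sketch of compositing per-sample color and density along each ray, in the spirit of raw2outputs in run_nerf.py (simplified; shapes and names are assumptions, and the repo additionally scales distances by the ray direction norm):

import torch
import torch.nn.functional as F

# raw: (n_rays, n_samples, 4) network output; z_vals: (n_rays, n_samples) depths.
def composite(raw, z_vals):
    dists = z_vals[..., 1:] - z_vals[..., :-1]            # spacing between samples
    dists = torch.cat([dists, 1e10 * torch.ones_like(dists[..., :1])], -1)
    rgb = torch.sigmoid(raw[..., :3])                     # per-sample color
    alpha = 1. - torch.exp(-F.relu(raw[..., 3]) * dists)  # per-sample opacity
    # transmittance: probability the ray travels unoccluded to each sample
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[..., :1]), 1. - alpha + 1e-10], -1), -1)[..., :-1]
    weights = alpha * trans
    return torch.sum(weights[..., None] * rgb, -2)        # (n_rays, 3) pixel colors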

Citation

Kudos to the authors for their amazing results:

@misc{mildenhall2020nerf,
    title={NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis},
    author={Ben Mildenhall and Pratul P. Srinivasan and Matthew Tancik and Jonathan T. Barron and Ravi Ramamoorthi and Ren Ng},
    year={2020},
    eprint={2003.08934},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

However, if you find this implementation or the pre-trained models helpful, please consider citing:

@misc{lin2020nerfpytorch,
  title={NeRF-pytorch},
  author={Yen-Chen, Lin},
  howpublished={\url{https://github.com/yenchenlin/nerf-pytorch/}},
  year={2020}
}
Owner
Yen-Chen Lin
PhD student at MIT CSAIL
Comments
  • replace searchsorted with in-built function

    Recent versions of PyTorch add searchsorted as a built-in; I had issues compiling your project with the versions of GCC and NVCC I have on my system. It looks like you were thinking along similar lines with your TODO comment.

    Because I wasn't able to compile the original torchsearchsorted module, I haven't been able to check whether the results are identical. However, training on the low-res models is converging, and PSNR is going up in a reasonable way.
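
    For reference, a minimal sketch of the swap, assuming PyTorch >= 1.6 (where torch.searchsorted was introduced); this mirrors how sample_pdf uses the extension but is not the repo's exact code:

    import torch

    # Hypothetical drop-in for the torchsearchsorted extension in sample_pdf.
    # cdf must be sorted along its last dimension; u holds the query values.
    def searchsorted_builtin(cdf, u):
        # right=True matches the extension's side='right': index of the first
        # cdf entry strictly greater than each u.
        return torch.searchsorted(cdf, u, right=True)

    # usage sketch inside sample_pdf:
    # inds = searchsorted_builtin(cdf, u)
    # below = torch.clamp(inds - 1, min=0)
    # above = torch.clamp(inds, max=cdf.shape[-1] - 1)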

  • Does not seem to converge on Lego

    Hi, thanks for your great PyTorch implementation! After running on the lego data, I found the loss stays around 0.13, and the output video and test frames are all white or black.

    Hope for your help! Thank you!

  • Does your code work on the NeRF synthetic dataset?

    Hi, I noticed that the original NeRF paper trained and tested on synthetic datasets (cars, drums, ...), so I wonder whether your code works on those datasets?

  • Abnormal Render Result.

    Hi, thank you for your excellent work! There is a problem: when I run python run_nerf.py --config configs\fern.txt --render_only, the render result is weird. All the intermediate images from ./fern_test/renderonly_path_200000 look like the attached image. I have no idea why this happens or how to fix it.

  • device-side assert triggered

    Traceback (most recent call last):
      File "run_nerf.py", line 858, in <module>
        train()
      File "run_nerf.py", line 742, in train
        **render_kwargs_train)
      File "run_nerf.py", line 126, in render
        all_ret = batchify_rays(rays, chunk, **kwargs)
      File "run_nerf.py", line 59, in batchify_rays
        ret = render_rays(rays_flat[i:i+chunk], **kwargs)
      File "run_nerf.py", line 401, in render_rays
        raw = network_query_fn(pts, viewdirs, run_fn)
      File "run_nerf.py", line 204, in <lambda>
        netchunk=args.netchunk)
      File "run_nerf.py", line 49, in run_network
        outputs_flat = batchify(fn, netchunk)(embedded)
      File "run_nerf.py", line 33, in ret
        return torch.cat([fn(inputs[i:i+chunk]) for i in range(0, inputs.shape[0], chunk)], 0)
      File "run_nerf.py", line 33, in <listcomp>
        return torch.cat([fn(inputs[i:i+chunk]) for i in range(0, inputs.shape[0], chunk)], 0)
      File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ovopark/nerf-pytorch-master/run_nerf_helpers.py", line 104, in forward
        h = F.relu(h)
      File "/usr/local/lib/python3.6/site-packages/torch/nn/functional.py", line 1119, in relu
        result = torch.relu(input)
    RuntimeError: CUDA error: device-side assert triggered

  • Can't run a pre-trained model

    Hi! Thanks for the PyTorch implementation, looks great!

    I tried to run pre-trained model:

    python3 run_nerf.py --config configs/lego.txt --render_only
    

    but it seems load_blender.py assumes there is another file, data/nerf_synthetic/lego/transforms_train.json, which is neither in the repo nor included with the pre-trained models:

    Traceback (most recent call last):
      File "run_nerf.py", line 860, in <module>
        train()
      File "run_nerf.py", line 570, in train
        images, poses, render_poses, hwf, i_split = load_blender_data(args.datadir, args.half_res, args.testskip)
      File "[PATH]/nerf-pytorch/load_blender.py", line 41, in load_blender_data
        with open(os.path.join(basedir, 'transforms_{}.json'.format(s)), 'r') as fp:
    FileNotFoundError: [Errno 2] No such file or directory: './data/nerf_synthetic/lego/transforms_train.json'
    

    Am I missing something?

    Thanks!

  • No output video after 200k iterations on the lego dataset.

    I have followed the instructions and run the "python run_nerf.py --config configs/lego.txt" command. After 200k iterations (the instructions claim 100k iterations), no video is output at logs/lego_test/lego_test_spiral_100000_rgb.mp4.

    The process ended without errors (screenshot attached), so it seems to have exited correctly.

    What should I do to acquire the rendered video, or the output results in other formats?

  • Deeper Network for views

    Have you experimented with the commented-out code here? Did it help or harm performance?

    https://github.com/yenchenlin/nerf-pytorch/blob/master/run_nerf_helpers.py#L90

    Seems like the paper and the official TF implementation do different things.
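
    For reference, the single-layer view branch from the paper looks roughly like this (a hypothetical module sketching the idea, not the repo's exact class):

    import torch
    import torch.nn as nn

    class ViewHead(nn.Module):
        # feat: 256-d scene feature; viewdirs_embedded: positionally encoded
        # view direction (27-d with the default 4 frequencies). The paper uses
        # one 128-unit layer before RGB, while the commented-out code stacks
        # additional view-dependent layers.
        def __init__(self, feat_dim=256, viewdir_dim=27, width=128):
            super().__init__()
            self.fc = nn.Linear(feat_dim + viewdir_dim, width)
            self.rgb = nn.Linear(width, 3)

        def forward(self, feat, viewdirs_embedded):
            h = torch.relu(self.fc(torch.cat([feat, viewdirs_embedded], -1)))
            return self.rgb(h)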

  • Unstable training result on Lego scene

    Thanks for your work!

    When I use your original code to train on the Lego scene with the same settings repeatedly, I get two different rendering results (two renders attached). The first is clearly blurrier than the second; checking the training logs, the train PSNR of the blurry run (mostly below 30) is lower than that of the sharper run (mostly above 30).

    Then I repeatedly ran several experiments on the Lego scene and found that the initial loss at Iter: 100 is unstable, leading to unstable rendering results.

    Do you have any idea about that?

    P.S.: This problem does not occur on the fern scene.

  • assert triggered!

    python run_nerf.py --config configs/fern.txt --render_only
    Loaded image data (378, 504, 3, 20) [378. 504. 407.56579161]
    Loaded ./data/nerf_llff_data/fern 16.985296178676084 80.00209740336334
    recentered (3, 5)
    [[ 1.0000000e+00  0.0000000e+00  0.0000000e+00  1.4901161e-09]
     [ 0.0000000e+00  1.0000000e+00 -1.8730975e-09 -9.6857544e-09]
     [-0.0000000e+00  1.8730975e-09  1.0000000e+00  0.0000000e+00]]
    Data: (20, 3, 5) (20, 378, 504, 3) (20, 2)
    HOLDOUT view is 12
    Loaded llff (20, 378, 504, 3) (120, 3, 5) [378. 504. 407.5658] ./data/nerf_llff_data/fern
    Auto LLFF holdout, 8
    DEFINING BOUNDS
    NEAR FAR 0.0 1.0
    Found ckpts ['./logs/fern_test/200000.tar']
    Reloading from ./logs/fern_test/200000.tar
    RENDER ONLY
    test poses shape torch.Size([120, 3, 5])
      0%| | 0/120 [00:00<?, ?it/s]0 0.003157377243041992
      0%| | 0/120 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "run_nerf.py", line 858, in <module>
        train()
      File "run_nerf.py", line 650, in train
        rgbs, _ = render_path(render_poses, hwf, args.chunk, render_kwargs_test, gt_imgs=images, savedir=testsavedir, render_factor=args.render_factor)
      File "run_nerf.py", line 154, in render_path
        rgb, disp, acc, _ = render(H, W, focal, chunk=chunk, c2w=c2w[:3,:4], **render_kwargs)
      File "run_nerf.py", line 126, in render
        all_ret = batchify_rays(rays, chunk, **kwargs)
      File "run_nerf.py", line 59, in batchify_rays
        ret = render_rays(rays_flat[i:i+chunk], **kwargs)
      File "run_nerf.py", line 393, in render_rays
        z_samples = sample_pdf(z_vals_mid, weights[...,1:-1], N_importance, det=(perturb==0.), pytest=pytest)
      File "/home/zhuxiangyang/work/nerf-pytorch/run_nerf_helpers.py", line 227, in sample_pdf
        below = torch.max(torch.zeros_like(inds-1), inds-1)
    RuntimeError: CUDA error: invalid device function
    Segmentation fault (core dumped)

  • Issue with training low-res lego NeRF

    Hi, I followed the instructions to train the low-res lego NeRF, but at iteration 100,000 the saved disp_map video was an invalid file. Upon investigating, the disp_map array being saved contained np.nan values. (Also, I switched the video writing from the imageio library to cv2.)

    This was traced to line 387 in run_nerf.py: raw = network_query_fn(pts, viewdirs, network_fn). Some rows of raw[...,3], representing density, were negative.

    Then at line 277, inside raw2outputs (line 264), raw2alpha = lambda raw, dists, act_fn=F.relu: 1.-torch.exp(-act_fn(raw)*dists) evaluates to zero because ReLU clips the negative values. This leads to the np.nan values appearing in disp_map within the same function block.

    So I would like to ask: am I doing something wrong here, or is this expected behaviour? Please kindly advise.
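
    For context, the nan appears when every sample along a ray gets zero alpha, so the weights sum to zero; a guarded disparity computation might look like this (a sketch under those assumptions, not the repo's exact code):

    import torch

    # weights, z_vals: (n_rays, n_samples); returns per-ray disparity with
    # rays of ~zero accumulated opacity mapped to 0 instead of nan.
    def disparity(weights, z_vals, eps=1e-10):
        acc = torch.sum(weights, -1)                 # accumulated opacity
        depth = torch.sum(weights * z_vals, -1)      # expected depth
        disp = 1. / torch.max(eps * torch.ones_like(depth),
                              depth / acc.clamp(min=eps))
        return torch.where(acc > eps, disp, torch.zeros_like(disp))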

  • config of vasedeck

    Hi, thanks for your great work. Is there a config .txt file for the vasedeck data?

    I used the horns config to train vasedeck, and the results are not very good (render attached).

  • Normalize rays_d

    Hi, thank you for sharing a good codebase.

    In https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/run_nerf_helpers.py#L159 you are using an unnormalized "rays_d"

    Shouldn't this be normalized so that it is a unit direction vector?

    Thanks in advance.
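
    A minimal sketch of generating rays with normalized directions, assuming PyTorch >= 1.10 for meshgrid's indexing argument (a hypothetical helper; the repo's own get_rays leaves rays_d unnormalized):

    import torch

    def get_rays_normalized(H, W, focal, c2w):
        j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                              torch.arange(W, dtype=torch.float32), indexing='ij')
        dirs = torch.stack([(i - W * .5) / focal, -(j - H * .5) / focal,
                            -torch.ones_like(i)], -1)               # camera space
        rays_d = torch.sum(dirs[..., None, :] * c2w[:3, :3], -1)    # rotate to world
        rays_d = rays_d / torch.norm(rays_d, dim=-1, keepdim=True)  # unit length
        rays_o = c2w[:3, -1].expand(rays_d.shape)
        return rays_o, rays_d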

  • Issue with using your own data

    I ran into some problems when using my own dataset. When I feed the poses_bounds.npy from imgs2poses into NeRF, the rendering results are very bad and the video looks like a cloud of fog.

  • Extracting geometry from a NeRF

    The TensorFlow implementation has a notebook with which the geometry of the scene can be extracted as a mesh via marching cubes.

    https://github.com/bmild/nerf https://github.com/bmild/nerf/blob/master/extract_mesh.ipynb

    Is this also possible in this PyTorch project?
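
    In principle the same recipe carries over: query the trained network for density on a 3D grid, then run marching cubes. A rough sketch, where query_density is a hypothetical wrapper around the repo's network_query_fn (grid extent and resolution are assumptions):

    import numpy as np
    import torch

    def extract_density_grid(query_density, n=256, bound=1.2):
        t = np.linspace(-bound, bound, n)
        pts = np.stack(np.meshgrid(t, t, t, indexing='ij'), -1).reshape(-1, 3)
        sigma = []
        with torch.no_grad():
            for chunk in np.array_split(pts, 64):
                sigma.append(query_density(torch.as_tensor(chunk, dtype=torch.float32)))
        return torch.cat(sigma).reshape(n, n, n).cpu().numpy()

    # then, e.g., with PyMCubes:
    # import mcubes
    # vertices, triangles = mcubes.marching_cubes(grid, threshold)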
