Deformable Neural Radiance Fields

This is the code for Deformable Neural Radiance Fields, a.k.a. Nerfies.

This codebase contains a re-implementation of Nerfies using JAX, building on JaxNeRF. We have been careful to match implementation details and have reproduced the original results presented in the paper.

Demo

We provide an easy-to-get-started demo using Google Colab!

These Colabs will allow you to train a basic version of our method using Cloud TPUs (or GPUs) on Google Colab.

Note that due to limited compute resources available, these are not the fully featured models. If you would like to train a fully featured Nerfie, please refer to the instructions below on how to train on your own machine.

Description Link
Process a video into a Nerfie dataset Open In Colab
Train a Nerfie Open In Colab
Render a Nerfie video Open In Colab

Setup

The code can be run under any environment with Python 3.7 and above. (It may run with lower versions, but we have not tested it).

We recommend using Miniconda and setting up an environment:

conda create --name nerfies python=3.8
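
Then activate the environment:

conda activate nerfies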

Next, install the required packages:

pip install -r requirements.txt

Install the appropriate JAX distribution for your environment by following the instructions here. For example:

# For CUDA version 11.0
pip install --upgrade jax jaxlib==0.1.57+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html

# For CUDA version 10.1
pip install --upgrade jax jaxlib==0.1.57+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html
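
As a quick sanity check (not part of the official instructions), you can confirm that JAX detects your accelerator:

python -c "import jax; print(jax.devices())"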

Training

After preparing a dataset, you can train a Nerfie by running:

export DATASET_PATH=/path/to/dataset
export EXPERIMENT_PATH=/path/to/save/experiment/to
python train.py \
    --data_dir $DATASET_PATH \
    --exp_dir $EXPERIMENT_PATH \
    --gin_configs configs/test_vrig.gin

To plot telemetry to TensorBoard and render checkpoints on the fly, also launch an evaluation job by running:

python eval.py \
    --data_dir $DATASET_PATH \
    --exp_dir $EXPERIMENT_PATH \
    --gin_configs configs/test_vrig.gin

The two jobs should use mutually exclusive sets of GPUs. This division allows the training job to run without having to stop for evaluation.
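
For example, on an 8-GPU machine you could restrict each job to its own devices with CUDA_VISIBLE_DEVICES. This is only a sketch; adjust the device indices to your setup:

# Training job on GPUs 0-6.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 python train.py \
    --data_dir $DATASET_PATH \
    --exp_dir $EXPERIMENT_PATH \
    --gin_configs configs/test_vrig.gin

# Evaluation job on GPU 7.
CUDA_VISIBLE_DEVICES=7 python eval.py \
    --data_dir $DATASET_PATH \
    --exp_dir $EXPERIMENT_PATH \
    --gin_configs configs/test_vrig.gin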

Configuration

  • We use Gin for configuration.
  • We provide a couple of preset configurations.
  • Please refer to config.py for documentation on what each configuration does.
  • Preset configs:
    • gpu_vrig_paper.gin: This is the configuration we used to generate the table in the paper. It requires 8 GPUs for training.
    • gpu_fullhd.gin: This is a high-resolution model and will take around 3 days to train on 8 GPUs.
    • gpu_quarterhd.gin: This is a low-resolution model and will take around 14 hours to train on 8 GPUs.
    • test_local.gin: This is a test configuration to check that the code runs. It will probably not produce a good-looking result.
    • test_vrig.gin: This is a test configuration to check that the code runs for validation rig captures. It will probably not produce a good-looking result.
  • Training on fewer GPUs will require tuning of the batch size and learning rates. We've provided an example configuration for 4 GPUs in gpu_quarterhd_4gpu.gin but we have not tested it, so please only use it as a reference.

Datasets

A dataset is a directory with the following structure:

dataset
    ├── camera
    │   └── ${item_id}.json
    ├── camera-paths
    ├── rgb
    │   ├── ${scale}x
    │   │   └── ${item_id}.png
    ├── metadata.json
    ├── points.npy
    ├── dataset.json
    └── scene.json

At a high level, a dataset is simply the following:

  • A collection of images (e.g., from a video).
  • Camera parameters for each image.

We have a unique identifier for each image which we call item_id, and this is used to match the camera and images. An item_id can be any string, but typically it is some alphanumeric string such as 000054.

camera

  • This directory contains cameras corresponding to each image.
  • We use a camera model identical to the OpenCV camera model, which is also supported by COLMAP.
  • Each camera is a serialized version of the Camera class defined in camera.py and looks like this:
{
  // A 3x3 world-to-camera rotation matrix representing the camera orientation.
  "orientation": [
    [0.9839, -0.0968, 0.1499],
    [-0.0350, -0.9284, -0.3699],
    [0.1749, 0.358, -0.9168]
  ],
  // The 3D position of the camera in world-space.
  "position": [-0.3236, -3.26428, 5.4160],
  // The focal length of the camera.
  "focal_length": 2691,
  // The principal point [u_0, v_0] of the camera.
  "principal_point": [1220, 1652],
  // The skew of the camera.
  "skew": 0.0,
  // The aspect ratio for the camera pixels.
  "pixel_aspect_ratio": 1.0,
  // Parameters for the radial distortion of the camera.
  "radial_distortion": [0.1004, -0.2090, 0.0],
  // Parameters for the tangential distortion of the camera.
  "tangential": [0.001109, -2.5733e-05],
  // The image width and height in pixels.
  "image_size": [2448, 3264]
}
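
As a minimal sketch (not using the repository's Camera class), here is how such a file could be read and the pinhole intrinsics assembled from the documented fields. The file path and the y-focal convention (focal_length scaled by pixel_aspect_ratio) are assumptions for illustration:

import json
import numpy as np

# Load a serialized camera (the path below is a placeholder).
with open('dataset/camera/000054.json') as f:
    cam = json.load(f)

orientation = np.array(cam['orientation'])  # 3x3 world-to-camera rotation.
position = np.array(cam['position'])        # Camera center in world space.

# Pinhole intrinsics, ignoring radial/tangential distortion.
fx = cam['focal_length']
fy = cam['focal_length'] * cam['pixel_aspect_ratio']
u0, v0 = cam['principal_point']
K = np.array([[fx, cam['skew'], u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

# World-to-camera transform: x_cam = orientation @ (x_world - position).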

camera-paths

  • This directory contains test-time camera paths which can be used to render videos.
  • Each sub-directory in this path should contain a sequence of JSON files.
  • The naming scheme does not matter, but the cameras will be sorted by their filenames.
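
For illustration, a small sketch of how the cameras in one path would be enumerated; the orbit-1 sub-directory name is hypothetical:

import glob
import os

# Cameras in a path are sorted by filename before rendering.
camera_path_dir = 'dataset/camera-paths/orbit-1'  # Hypothetical sub-directory.
camera_files = sorted(glob.glob(os.path.join(camera_path_dir, '*.json')))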

rgb

  • This directory contains images at various scales.
  • Each subdirectory should be named ${scale}x where ${scale} is an integer scaling factor. For example, 1x would contain the original images while 4x would contain images a quarter of the size.
  • We assume the images are in PNG format.
  • It is important that the scaled images are integer factors of the original so that area-based downsampling can be used, which prevents Moiré patterns. A simple way to ensure this is to trim the borders of the image so that its dimensions are divisible by the largest scale factor you want (see the sketch below).
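
For illustration, a sketch of generating the scaled directories with area (box-filter) downsampling using Pillow. The helper function and the 1x/2x/4x choice are assumptions, not part of the codebase:

import os
from PIL import Image

def make_scaled_images(rgb_dir, item_ids, scales=(1, 2, 4)):
  """Writes rgb/{scale}x/{item_id}.png for each scale (hypothetical helper)."""
  max_scale = max(scales)
  for item_id in item_ids:
    image = Image.open(os.path.join(rgb_dir, '1x', f'{item_id}.png'))
    # Trim borders so the dimensions are divisible by the maximum scale factor.
    width = image.width - image.width % max_scale
    height = image.height - image.height % max_scale
    image = image.crop((0, 0, width, height))
    for scale in scales:
      out_dir = os.path.join(rgb_dir, f'{scale}x')
      os.makedirs(out_dir, exist_ok=True)
      # BOX resampling averages pixel areas, which avoids Moiré patterns.
      scaled = image.resize((width // scale, height // scale), Image.BOX)
      scaled.save(os.path.join(out_dir, f'{item_id}.png'))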

metadata.json

  • This defines the 'metadata' IDs used for embedding lookups.
  • Contains a dictionary of the following format:
{
    "${item_id}": {
        // The embedding ID used to fetch the deformation latent code
        // passed to the deformation field.
        "warp_id": 0,
        // The embedding ID used to fetch the appearance latent code
        // which is passed to the second branch of the template NeRF.
        "appearance_id": 0,
        // For validation rig datasets, we use the camera ID instead
        // of the appearance ID. For example, this would be '0' for the
        // left camera and '1' for the right camera. This can also
        // be used for multi-view setups.
        "camera_id": 0
    },
    ...
}
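
For a plain single-camera video capture, a minimal sketch of how this file could be generated. The helper and the choice of one warp/appearance embedding per frame are assumptions:

import json

def write_metadata(item_ids, path='dataset/metadata.json'):
  # One warp and appearance embedding per frame; a single camera gets ID 0.
  metadata = {
      item_id: {'warp_id': i, 'appearance_id': i, 'camera_id': 0}
      for i, item_id in enumerate(item_ids)
  }
  with open(path, 'w') as f:
    json.dump(metadata, f, indent=2)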

scene.json

  • Contains information about how we will parse the scene.
  • See comments inline.
{
  // The scale factor we will apply to the pointcloud and cameras. This is
  // important since it controls what scale is used when computing the positional
  // encoding.
  "scale": 0.0387243672920458,
  // Defines the origin of the scene. The scene will be translated such that
  // this point becomes the origin. Defined in unscaled coordinates.
  "center": [
    1.1770838526103944e-08,
    -2.58235339289195,
    -1.29117656263135
  ],
  // The distance of the near plane from the camera center in scaled coordinates.
  "near": 0.02057418950149491,
  // The distance of the far plane from the camera center in scaled coordinates.
  "far": 0.8261601717667288
}
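
As a sketch of how these values are meant to be applied, following the inline comments above (the snippet itself is not from the codebase): translate by the unscaled center first, then multiply by the scale factor.

import json
import numpy as np

with open('dataset/scene.json') as f:
  scene = json.load(f)

center = np.array(scene['center'])
scale = scene['scale']

# Bring an unscaled world-space camera position into the scaled, centered
# scene coordinates in which `near` and `far` are expressed.
camera_position = np.array([-0.3236, -3.26428, 5.4160])
scene_position = (camera_position - center) * scale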

dataset.json

  • Defines the training/validation split of the dataset.
  • See inline comments:
{
  // The total number of images in the dataset.
  "count": 114,
  // The total number of training images (exemplars) in the dataset.
  "num_exemplars": 57,
  // A list containing all item IDs in the dataset.
  "ids": [...],
  // A list containing all training item IDs in the dataset.
  "train_ids": [...],
  // A list containing all validation item IDs in the dataset.
  // This should be mutually exclusive with `train_ids`.
  "val_ids": [...],
}
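
A minimal sketch of writing this file with an every-other-frame train/validation split. The helper is hypothetical; the example above, with 114 images and 57 exemplars, is consistent with such a split:

import json

def write_dataset_split(item_ids, path='dataset/dataset.json'):
  # Alternate frames between training and validation.
  train_ids = item_ids[0::2]
  val_ids = item_ids[1::2]
  dataset = {
      'count': len(item_ids),
      'num_exemplars': len(train_ids),
      'ids': item_ids,
      'train_ids': train_ids,
      'val_ids': val_ids,
  }
  with open(path, 'w') as f:
    json.dump(dataset, f, indent=2)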

points.npy

  • A numpy file containing a single array of size (N,3) containing the background points.
  • This is required if you want to use the background regularization loss.
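
For reference, a sketch of writing and checking such a file; the point values here are placeholders:

import numpy as np

# points: an (N, 3) array of background points in world coordinates.
points = np.random.uniform(-1.0, 1.0, size=(1000, 3))  # Placeholder values.
np.save('dataset/points.npy', points)

loaded = np.load('dataset/points.npy')
assert loaded.shape == (1000, 3)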

Citing

If you find our work useful, please consider citing:

@article{park2020nerfies,
  author    = {Park, Keunhong 
               and Sinha, Utkarsh 
               and Barron, Jonathan T. 
               and Bouaziz, Sofien 
               and Goldman, Dan B 
               and Seitz, Steven M. 
               and Martin-Brualla, Ricardo},
  title     = {Deformable Neural Radiance Fields},
  journal   = {arXiv preprint arXiv:2011.12948},
  year      = {2020},
}
Comments
  • Nerfies_Training.ipynb Colab fails on import configs, having changed no settings

    I ran both Colabs in sequence: Nerfies_Capture_Processing.ipynb and Nerfies_Training.ipynb. In the process of running the second one, having changed nothing, I got an error. Thanks.


    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>()
          6 from IPython.display import display, Markdown
          7
    ----> 8 from nerfies import configs
          9
         10

    5 frames
    /usr/lib/python3.7/dataclasses.py in _get_field(cls, a_name, a_type)
        731     # For real fields, disallow mutable defaults for known types.
        732     if f._field_type is _FIELD and isinstance(f.default, (list, dict, set)):
    --> 733         raise ValueError(f'mutable default {type(f.default)} for field '
        734                          f'{f.name} is not allowed: use default_factory')
        735

  • Questions about face processing section in Colab

    Hi, thanks for releasing the codes.

    In the 'Process a video into a Nerfie dataset' Colab, it seems that the face processing section is optional. I skipped this section and ran the compute scene information section. It gives the error "name 'new_scene_manager' is not defined". 'new_scene_manager' is defined in the face processing section. Is it necessary to run the face processing section, or did I miss anything?


    Thank you!

  • How to use on a video that does not contain a face?

    Hi, currently the preprocessing notebook for dataset generation is limited to videos that contain faces, despite this being marked as an "optional" step.

    The MediaPipe library is used to get a mesh of the face, and parts of this mesh are used to generate the scene.json as well as the test camera path.

    Do you have a more generic workflow or simplified method for generating the scene.json?

  • Flag --base_folder must have a value other than None.

    I encountered this issue when I ran python train.py --data_dir $DATASET_PATH --base_folder $EXPERIMENT_PATH --gin_configs configs/test_vrig.gin. Below is the error log. What can I try to solve this? Thanks!

    Traceback (most recent call last):
      File "train.py", line 54, in <module>
        jax.config.parse_flags_with_absl()
      File "/home/jllantero/miniconda3/envs/nerfies/lib/python3.8/site-packages/jax/_src/config.py", line 161, in parse_flags_with_absl
        absl.flags.FLAGS(jax_argv, known_only=True)
      File "/home/jllantero/miniconda3/envs/nerfies/lib/python3.8/site-packages/absl/flags/_flagvalues.py", line 673, in __call__
        self.validate_all_flags()
      File "/home/jllantero/miniconda3/envs/nerfies/lib/python3.8/site-packages/absl/flags/_flagvalues.py", line 533, in validate_all_flags
        self._assert_validators(all_validators)
      File "/home/jllantero/miniconda3/envs/nerfies/lib/python3.8/site-packages/absl/flags/_flagvalues.py", line 568, in _assert_validators
        raise _exceptions.IllegalFlagValueError('\n'.join(messages))
    absl.flags._exceptions.IllegalFlagValueError: flag --base_folder=None: Flag --base_folder must have a value other than None.
    
  • Rendering results are strange

    During the training process, the predicted RGB for the validation dataset looks fine (screenshot). But when rendering with the latest checkpoint, I got strange results like the following (screenshot).

  • Error with Nerfies configuration in "Nerfies Render Video.ipynb".

    Hi, I have been using your Google Colab notebooks for several days. They were working fine but yesterday I tried to use "Nerfies Render Video.ipynb" and I got this error:


    I have tried changing the "train_dir:" and "data_dir:", but I still get the same error. Any ideas on how to solve this problem?

  • JAX error: RuntimeError: optional has no value

    When parsing data in https://github.com/google/nerfies/blob/main/notebooks/Nerfies_Capture_Processing.ipynb and executing the following cell:

    if colmap_image_scale > 1:
      print(f'Scaling COLMAP cameras back to 1x from {colmap_image_scale}x.')
      for item_id in scene_manager.image_ids:
        camera = scene_manager.camera_dict[item_id]
        scene_manager.camera_dict[item_id] = camera.scale(colmap_image_scale)
    

    I run into the error below, which seems to be a JAX error. How should we address this? Thanks.

    Scaling COLMAP cameras back to 1x from 4x.
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-24-32db5f4d2f95> in <module>
         11   for item_id in scene_manager.image_ids:
         12     camera = scene_manager.camera_dict[item_id]
    ---> 13     scene_manager.camera_dict[item_id] = camera.scale(colmap_image_scale)
         14 
         15 
    
    /usr/local/lib/python3.8/dist-packages/nerfies/camera.py in scale(self, scale)
        315         radial_distortion=self.radial_distortion.copy(),
        316         tangential_distortion=self.tangential_distortion.copy(),
    --> 317         image_size=jnp.array((int(round(self.image_size[0] * scale)),
        318                               int(round(self.image_size[1] * scale)))),
        319     )
    
    /usr/local/lib/python3.8/dist-packages/jax/_src/numpy/lax_numpy.py in array(object, dtype, copy, order, ndmin)
       2903     _inferred_dtype = object.dtype and dtypes.canonicalize_dtype(object.dtype)
       2904     lax._check_user_dtype_supported(_inferred_dtype, "array")
    -> 2905     out = _device_put_raw(object, weak_type=weak_type)
       2906     if dtype: assert _dtype(out) == dtype
       2907   elif isinstance(object, (DeviceArray, core.Tracer)):
    
    /usr/local/lib/python3.8/dist-packages/jax/_src/lax/lax.py in _device_put_raw(x, weak_type)
       1493   else:
       1494     aval = raise_to_shaped(core.get_aval(x), weak_type=weak_type)
    -> 1495     return xla.array_result_handler(None, aval)(*xla.device_put(x))
       1496 
       1497 def iota(dtype: DType, size: int) -> Array:
    
    /usr/local/lib/python3.8/dist-packages/jax/interpreters/xla.py in make_device_array(aval, device, device_buffer)
       1032   if (isinstance(device_buffer, _CppDeviceArray)):
       1033 
    -> 1034     if device_buffer.aval == aval and device_buffer._device == device:
       1035       return device_buffer
       1036     device_buffer = device_buffer.clone()
    
    RuntimeError: optional has no value
    
  • TypeError in rendering video frames.

    Hi, thanks a lot for releasing the code! I tried running the Colab demo, but got errors (as shown in the figure) when rendering a video in both the training and rendering notebooks. How can I solve this problem? (Is it related to the WARNING at the top of the figure? And how do I set the 'base_64' param mentioned in the warning?)

  • How to get points.npy

    I noticed that points.npy is included in the dataset for background regularization, but it is not created by the "Process a video into a Nerfie dataset" demo. How can I get it?

  • Cannot import flax in self-hosted Jupyter notebook

    Hi, thanks to everyone involved in publishing this. I managed to get the video extraction notebook running after installing heaps of packages missing from requirements.txt.

    Now I'm stuck in the second notebook, trying to import flax. Running the following in the same cell yields:

    !python -c "import flax; print(flax)"
    import flax

    <module 'flax' from '/opt/conda/envs/nerfies/lib/python3.8/site-packages/flax/__init__.py'>

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>
          1 get_ipython().system('python -c "import flax; print(flax)"')
    ----> 2 import flax

    ModuleNotFoundError: No module named 'flax'

    Does anyone reading this have a clue what's up? Cheers, Magnus

  • Unable to run on GPU

    I am trying to run Nerfies on GPU and it always fails with "RuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16504591536 bytes."

    I tried it locally and on Colab (selected the GPU runtime and GPU in the code).

    Both places same error.

    It happens at the "Train a Nerfie!" cell, on the lines:

    with time_tracker.record_time('train_step'):
      state, stats, keys = ptrain_step(keys, state, batch, scalar_params)
    time_tracker.toc('total')

    Any solution to this??

  • AttributeError: module 'jaxlib.pocketfft' has no attribute 'pocketfft'

    Traceback (most recent call last):
      File "D:/VS/python/nerfies-main/train.py", line 23, in <module>
        from flax import jax_utils
      File "D:\RJ\A\python3.8\lib\site-packages\flax\__init__.py", line 36, in <module>
        from . import core
      File "D:\RJ\A\python3.8\lib\site-packages\flax\core\__init__.py", line 15, in <module>
        from .axes_scan import broadcast
      File "D:\RJ\A\python3.8\lib\site-packages\flax\core\axes_scan.py", line 17, in <module>
        import jax
      File "D:\RJ\A\python3.8\lib\site-packages\jax\__init__.py", line 109, in <module>
        from .experimental.maps import soft_pmap
      File "D:\RJ\A\python3.8\lib\site-packages\jax\experimental\maps.py", line 25, in <module>
        from .. import numpy as jnp
      File "D:\RJ\A\python3.8\lib\site-packages\jax\numpy\__init__.py", line 16, in <module>
        from . import fft
      File "D:\RJ\A\python3.8\lib\site-packages\jax\numpy\fft.py", line 17, in <module>
        from jax._src.numpy.fft import (
      File "D:\RJ\A\python3.8\lib\site-packages\jax\_src\numpy\fft.py", line 19, in <module>
        from jax import lax
      File "D:\RJ\A\python3.8\lib\site-packages\jax\lax\__init__.py", line 332, in <module>
        from jax._src.lax.fft import (
      File "D:\RJ\A\python3.8\lib\site-packages\jax\_src\lax\fft.py", line 145, in <module>
        xla.backend_specific_translations['cpu'][fft_p] = pocketfft.pocketfft
    AttributeError: module 'jaxlib.pocketfft' has no attribute 'pocketfft'

  • mask and mask-colmap

    Thanks a lot for sharing the code! I noticed that there are mask and mask-colmap folders in the dataset you shared, but I could not generate these two folders using the "Nerfies Capture Processing v2.ipynb" notebook you provided, which makes it impossible for me to carry out my own training in the next step. Could you please tell me how to generate the contents of these two folders? Thank you very much.

  • V2

    Hi developers,

    I find there are still a lot of issues caused by code bugs in the [email protected] These issues have been opened in the issues section, including #58 #59 #53 #44.

    The main corrections to the code bugs are listed below:

    1. In configs.py, at line 19, "from flax import nn" should be corrected to "from flax import linen as nn".
    2. In model_utils.py, at line 107, "jnp.broadcast_to([last_sample_z], z_vals[..., :1].shape)" should be corrected to "jnp.broadcast_to(jnp.array([last_sample_z]), z_vals[..., :1].shape)".

    After fixing these bugs, Nerfies can run with the prepared customised image data.

    Larry

  • Could we have the original videos for the dataset?

    Hello, could we have the original videos for the dataset? I would like to run through the 'Process a video into a Nerfie dataset' Colab notebook with the original video.

  • Error while running Colab v2: TypeError in model_utils (stack trace included)

    Including the stack trace for the TypeError:

    TypeError Traceback (most recent call last)

    /usr/local/lib/python3.7/dist-packages/nerfies/models.py in construct_nerf(key, config, batch_size, appearance_ids, camera_ids, warp_ids, near, far, use_warp_jacobian, use_weights)
        485       },
        486       init_rays_dict,
    --> 487       warp_extra=warp_extra)['params']
        488
        489   return model, params

    /usr/local/lib/python3.7/dist-packages/nerfies/models.py in __call__(self, rays_dict, warp_extra, metadata_encoded, use_warp, return_points, return_weights, return_warp_jacobian, deterministic)
        346         metadata_encoded=metadata_encoded,
        347         return_points=return_points,
    --> 348         return_weights=True)
        349     out = {'coarse': coarse_ret}
        350

    /usr/local/lib/python3.7/dist-packages/nerfies/models.py in render_samples(self, level, points, z_vals, directions, viewdirs, metadata, warp_extra, use_warp, use_warp_jacobian, metadata_encoded, return_points, return_weights)
        283         return_weights=return_weights,
        284         use_white_background=self.use_white_background,
    --> 285         sample_at_infinity=self.use_sample_at_infinity))
        286
        287     return out

    /usr/local/lib/python3.7/dist-packages/nerfies/model_utils.py in volumetric_rendering(rgb, sigma, z_vals, dirs, use_white_background, sample_at_infinity, return_weights, eps)
        105   dists = jnp.concatenate([
        106       z_vals[..., 1:] - z_vals[..., :-1],
    --> 107       jnp.broadcast_to([last_sample_z], z_vals[..., :1].shape)
        108   ], -1)
        109   dists = dists * jnp.linalg.norm(dirs[..., None, :], axis=-1)

    /usr/local/lib/python3.7/dist-packages/jax/_src/numpy/util.py in _broadcast_to(arr, shape)

    /usr/local/lib/python3.7/dist-packages/jax/_src/numpy/util.py in _check_arraylike(fun_name, *args)

    TypeError: broadcast_to requires ndarray or scalar arguments, got <class 'list'> at position 0.

  • Error while trying to run Colab notebook: Nerfies Training v2.ipynb

    Hi, I was trying to run the Colab notebook Nerfies Training v2.ipynb after having run Nerfies Capture Processing v2.ipynb on a video.

    I got stuck on an error that I don't understand due to my lack of knowledge. Maybe someone can help. This is the error the notebook produced:

    UnfilteredStackTrace                      Traceback (most recent call last)
    <ipython-input> in <module>()
         34     use_warp_jacobian=train_config.use_elastic_loss,
    ---> 35     use_weights=train_config.use_elastic_loss)
         36

    26 frames
    UnfilteredStackTrace: TypeError: broadcast_to requires ndarray or scalar arguments, got <class 'list'> at position 0.

    The stack trace below excludes JAX-internal frames. The preceding is the original exception that occurred, unmodified.

    The above exception was the direct cause of the following exception:

    TypeError                                 Traceback (most recent call last)
    /usr/local/lib/python3.7/dist-packages/jax/_src/numpy/util.py in _check_arraylike(fun_name, *args)
        293                     if not _arraylike(arg))
        294     msg = "{} requires ndarray or scalar arguments, got {} at position {}."
    --> 295     raise TypeError(msg.format(fun_name, type(arg), pos))
        296
        297

    TypeError: broadcast_to requires ndarray or scalar arguments, got <class 'list'> at position 0.

    Thanks in advanced
