PyTorch implementation for A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose

A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose

Paper | Website | Data

A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose
Shih-Yang Su, Frank Yu, Michael Zollhöfer, and Helge Rhodin
Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS 2021)

Setup

Setup environment

conda create -n anerf python=3.8
conda activate anerf

# install pytorch for your corresponding CUDA environments
pip install torch

# install pytorch3d: note that doing `pip install pytorch3d` directly may install an older version with bugs.
# be sure that you specify the version that matches your CUDA environment. See: https://github.com/facebookresearch/pytorch3d
pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu102_pyt190/download.html

# install other dependencies
pip install -r requirements.txt
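
To sanity-check the installation, the following minimal sketch (not part of the repo) verifies that PyTorch sees your GPU and that PyTorch3D imports cleanly:

# env_check.py - hypothetical helper, not included in this repository
import torch
import pytorch3d

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('pytorch3d:', pytorch3d.__version__)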

Download pre-processed data and pre-trained models

We provide pre-processed data in .h5 format, as well as pre-trained characters for the SURREAL and Mixamo datasets.

Please see data/README.md for details.
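
If you want a quick look at what a pre-processed file contains, the .h5 files open with h5py. A minimal sketch; the path below is an assumed example (substitute the file you downloaded), and the exact key layout is described in data/README.md rather than here:

# inspect_h5.py - hypothetical helper
import h5py

with h5py.File('data/surreal/surreal_val_h5py.h5', 'r') as f:
    f.visit(print)  # print every group/dataset name in the file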

Testing

You can use run_render.py to render the learned models under different camera motions, or retarget the character to different poses by

python run_render.py --nerf_args logs/surreal_model/args.txt --ckptpath logs/surreal_model/150000.tar \
                     --dataset surreal --entry hard --render_type bullet --render_res 512 512 \
                     --white_bkgd --runname surreal_bullet

Here,

  • --dataset specifies the data source for poses,
  • --entry specifies the particular subset from the dataset to render,
  • --render_type defines the camera motion to use, and
  • --render_res specifies the height and width of the rendered images.

Therefore, the above command will render the learned SURREAL character at 512x512 with a bullet-time effect like the following (resized to 256x256):

The output can be found in render_output/surreal_bullet/.
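
If you prefer to assemble the frames into a video yourself, a sketch like the following works, assuming the renderer writes individual .png frames into the output directory (imageio with the ffmpeg plugin must be installed):

# frames_to_video.py - hypothetical helper, not part of the repo
import glob
import imageio

frames = sorted(glob.glob('render_output/surreal_bullet/*.png'))
writer = imageio.get_writer('surreal_bullet.mp4', fps=14)  # assumed frame rate; adjust as needed
for path in frames:
    writer.append_data(imageio.imread(path))
writer.close()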

You can also extract a mesh for the learned character:

python run_render.py --nerf_args logs/surreal_model/args.txt --ckptpath logs/surreal_model/150000.tar \
                     --dataset surreal --entry hard --render_type mesh --runname surreal_mesh

You can find the extracted .ply files in render_output/surreal_mesh/meshes/.
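
Since these are standard .ply files, any mesh library can read them, e.g. trimesh (not a dependency of this repo; install it separately). A minimal sketch:

# inspect_meshes.py - hypothetical helper
import glob
import trimesh

for path in sorted(glob.glob('render_output/surreal_mesh/meshes/*.ply')):
    mesh = trimesh.load(path)
    print(path, '-', len(mesh.vertices), 'vertices,', len(mesh.faces), 'faces')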

To render the mesh as in the paper, run

python render_mesh.py --expname surreal_mesh 

which will output the rendered images in render_output/surreal_mesh/mesh_render/ like the following:

You can change the settings in run_render.py to create your own rendering configuration.

Training

We provide template training configurations in configs/ for different settings.

To train A-NeRF on our pre-processed SURREAL dataset,

python run_nerf.py --config configs/surreal/surreal.txt --basedir logs  --expname surreal_model

The trained weights and log can be found in logs/surreal_model.

To train A-NeRF on our pre-processed Mixamo dataset with estimated poses, run

python run_nerf.py --config configs/mixamo/mixamo.txt --basedir log_mixamo/ --num_workers 8 --subject archer --expname mixamo_archer

This will train A-NeRF on Mixamo Archer with pose refinement for 500k iterations, with 8 worker threads for the dataloader.

You can also add --use_temp_loss --temp_coef 0.05 to optimize the poses with a temporal constraint.

Additionally, you can specify --opt_pose_stop 200000 to stop the pose refinement at 200k iterations and optimize only the body models for the remaining iterations.
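
For example, combining both options with the Mixamo command above:

python run_nerf.py --config configs/mixamo/mixamo.txt --basedir log_mixamo/ --num_workers 8 --subject archer --expname mixamo_archer \
                   --use_temp_loss --temp_coef 0.05 --opt_pose_stop 200000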

To finetune the learned model, run

python run_nerf.py --config configs/mixamo/mixamo_finetune.txt --finetune --ft_path log_mixamo/mixamo_archer/500000.tar --expname mixamo_archer_finetune

This will finetune the learned Mixamo Archer for 200k iterations with the already-refined poses. Note that the poses are not updated during finetuning.
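
The .tar checkpoints are presumably ordinary torch.save archives (as in other NeRF codebases), so you can inspect one before finetuning. A minimal sketch; the exact keys stored in the checkpoint are an assumption:

# inspect_ckpt.py - hypothetical helper
import torch

# load on CPU so no GPU is needed just for inspection
ckpt = torch.load('log_mixamo/mixamo_archer/500000.tar', map_location='cpu')
print(list(ckpt.keys()))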

Citation

@inproceedings{su2021anerf,
    title={A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose},
    author={Su, Shih-Yang and Yu, Frank and Zollh{\"o}fer, Michael and Rhodin, Helge},
    booktitle = {Advances in Neural Information Processing Systems},
    year={2021}
}

Comments
  • Question about extend_scales.

    Hello, when I made my own video dataset, I found that there is a parameter 'extend_scale' used to zoom the estimated SMPL model in or out and align it to the 'rest_pose' you provide. When I draw the 2D keypoints (calculated from the aligned kp3d and c2w) on the picture, there are some deviations. I am a little confused about this.

    Can I directly use the SPIN-estimated result?

    Is there anything to pay attention to in the selection of this parameter?

  • Problems when training A-NeRF in zju_mocap dataset

    When I trained A-NeRF on zju_mocap without changing the dataloader in this repo, there were some artifacts on the floor in the visualization. Did you meet this problem when training on zju_mocap, or did I do something wrong during training?

  • Missing logs folder

    Hi, @LemonATsu, congratulations on your great work. When I tried to test the pretrained model, I ran into a small problem: the logs folder cannot be found. Maybe this could be specified further in the README.

  • Does the codebase support multi-view training as done in Table A6 of Appendix?

    Hello,

    First of all, thank you for releasing the code for your amazing research. I loved the fact that this paper tried from the get-go to make as few assumptions as possible and refrain from using too many human-specific priors.

    My question is does the codebase support multi-view training i.e., training with multi-view video as done in Table A6 of the Appendix? I've found a flag in run_nerf.py that seems to refer to the use of multi-view video but it appears as though this flag isn't actually used during training of the NeRF: (https://github.com/LemonATsu/A-NeRF/blob/1b07b7aa2a0bd8d2f6bf13632b86760b4aa44e04/run_nerf.py#L447)

    If the codebase doesn't yet support multi-view training, could you provide some advice as to how to modify the codebase to enable multi-view? For instance, how did you incorporate multi-view into the codebase when performing the experiments for Table A6?

    Thank you in advance :)

  • "Segmentation fault" problem

    Hi, author, I encountered a problem when running the training code: it stopped at run_nerf.py (line 539) while spawning multiprocessing workers.

    The console output is as follows:

    $ python run_nerf.py --config configs/surreal/surreal.txt --basedir logs  --expname surreal_model
    init meta
    parent-centered
    Loader initialized.
    KPE: RelDist, BPE: VecNorm, VPE: VecNorm
    Embedder class: <class 'core.cutoff_embedder.CutoffEmbedder'>
    Normalization: False opt_cut :False
    Embedder class: <class 'core.cutoff_embedder.Embedder'>
    Embedder class: <class 'core.cutoff_embedder.CutoffEmbedder'>
    Normalization: False opt_cut :False
    RayCaster(
      (network): NeRF(
        (pts_linears): ModuleList(
          (0): Linear(in_features=432, out_features=256, bias=True)
          (1): Linear(in_features=256, out_features=256, bias=True)
          (2): Linear(in_features=256, out_features=256, bias=True)
          (3): Linear(in_features=256, out_features=256, bias=True)
          (4): Linear(in_features=256, out_features=256, bias=True)
          (5): Linear(in_features=688, out_features=256, bias=True)
          (6): Linear(in_features=256, out_features=256, bias=True)
          (7): Linear(in_features=256, out_features=256, bias=True)
        )
        (alpha_linear): Linear(in_features=256, out_features=1, bias=True)
        (views_linears): ModuleList(
          (0): Linear(in_features=904, out_features=128, bias=True)
        )
        (feature_linear): Linear(in_features=256, out_features=256, bias=True)
        (rgb_linear): Linear(in_features=128, out_features=3, bias=True)
      )
      (network_fine): NeRF(
        (pts_linears): ModuleList(
          (0): Linear(in_features=432, out_features=256, bias=True)
          (1): Linear(in_features=256, out_features=256, bias=True)
          (2): Linear(in_features=256, out_features=256, bias=True)
          (3): Linear(in_features=256, out_features=256, bias=True)
          (4): Linear(in_features=256, out_features=256, bias=True)
          (5): Linear(in_features=688, out_features=256, bias=True)
          (6): Linear(in_features=256, out_features=256, bias=True)
          (7): Linear(in_features=256, out_features=256, bias=True)
        )
        (alpha_linear): Linear(in_features=256, out_features=1, bias=True)
        (views_linears): ModuleList(
          (0): Linear(in_features=904, out_features=128, bias=True)
        )
        (feature_linear): Linear(in_features=256, out_features=256, bias=True)
        (rgb_linear): Linear(in_features=128, out_features=3, bias=True)
      )
      (embed_fn): CutoffEmbedder()
      (embedbones_fn): Embedder()
      (embeddirs_fn): CutoffEmbedder()
    )
    Found ckpts []
    #parameters: 864260
    done creating popt
    Segmentation fault (core dumped)
    
     /home/chenhe/anaconda3/envs/anerf/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 56 leaked semaphore objects to clean up at shutdown
      warnings.warn('resource_tracker: There appear to be %d '
    
    

    My environment is 4x TITAN Xp GPUs (12GB), Python 3.8 + PyTorch 1.7.1 + CUDA 10.1 + cuDNN 7.6.5. I guess the training data's resolution may be too high (1000x1000). Could you tell me how to compress the training data? How can I reduce the training size by changing the args' parameters, or is there another solution?

  • Cannot extract mesh for the learned character nor render mesh as described in README.md

    https://github.com/LemonATsu/A-NeRF/blob/fe553717052cc2696714177566641cdaaba0459a/run_render.py#L973

    I've successfully run the first test with run_render.py to render the learned model. However, running the second test (rendering the mesh) produces an error referencing that import line: ModuleNotFoundError: No module named 'mcubes'.

    My environment is set up precisely as described in the README.md. I tried manually installing the supposedly missing package, but that results in a Python version conflict because A-NeRF requires 3.8: Specifications: - marching_cubes -> python[version='>=3.7,<3.8.0a0|>=3.7.6,<3.8.0a0']

  • How to use the pre-trained model for run_render.py

    Hi,

    When I run run_render.py, I don't know how to use the config file in the function config_parser(). The following is what I set, but there is always an error.

        parser.add_argument('--config', is_config_file=True, default='configs/surreal/surreal.txt',
                            help='config file path')
        # nerf config
        parser.add_argument('--nerf_args', type=str, required=True, default='configs/surreal/surreal.txt',
                            help='path to nerf configuration (args.txt in log)')
        parser.add_argument('--ckptpath', type=str, required=True, default='model/surreal.tar',
                            help='path to ckpt')
        # render config
        parser.add_argument('--render_res', nargs='+', type=int, default=[512, 512],
                            help='tuple of resolution in (H, W) for rendering')
        parser.add_argument('--dataset', type=str, required=True, default='data/surreal/surreal_val_h5py.h5',
                            help='dataset to render')
        parser.add_argument('--entry', type=str, required=True, default='hard',
                            help='entry in the dataset catalog to render')
        parser.add_argument('--white_bkgd', action='store_true', default=True,
                            help='render with white background')
        parser.add_argument('--render_type', type=str, default='bullet',
                            help='type of rendering to conduct')
        parser.add_argument('--save_gt', action='store_true',
                            help='save gt frames')
        parser.add_argument('--fps', type=int, default=14,
                            help='fps for video')
        parser.add_argument('--mesh_res', type=int, default=255,
                            help='resolution for marching cubes')
        # kp-related
        parser.add_argument('--render_refined', action='store_true',
                            help='render from refined poses')
        parser.add_argument('--subject_idx', type=int, default=0,
                            help='which subject to render (for MINeRF)')
        # frame-related
        parser.add_argument('--selected_idxs', nargs='+', type=int, default=None,
                            help='hand-picked idxs for rendering')
        parser.add_argument('--selected_framecode', type=int, default=None,
                            help='hand-picked framecode for rendering')
        # saving
        parser.add_argument('--outputdir', type=str, default='render_output/',
                            help='output directory')
        parser.add_argument('--runname', type=str, required=True, default='surreal_bullet',
                            help='run name as an identifier ')
        # evaluation
        parser.add_argument('--eval', action='store_true',
                            help='to do evaluation at the end or not (only in bounding box)')
        parser.add_argument('--no_save', action='store_true',
                            help='no image saving operation')
    

    The error is "run_render.py: error: argument --subject_idx: invalid int value: 'female'". Can you give me an example of using the pre-trained model and the data you posted?

    Thank you, Letian

  • Docker Image

    As a Windows user, it's been tough just trying to get the environment set up, e.g., issues with PyTorch3D. Just wondering if there is a Docker environment available out there for A-NeRF?

  • Question about kp3d from smpl parameters?

    Hi, I want to ask: if I have original SMPL parameters whose global orientation and translation differ from the zju_mocap dataset (i.e., just the rotation and translation of the pelvis), how can I get the kp3ds and skts the way the get_smpls function in load_zju.py does? I am confused about this transformation and would appreciate any advice, thanks!

  • th_ssim dimension error.

    Hi, this work is wonderful, but I met some problems trying to train the network: in evaluation_helpers.py line 317, test_ssim = th_ssim.permute(0, 2, 3, 1).cpu().numpy() fails with "number of dims don't match in permute". The th_ssim I got is not a 4-D tensor; it is torch.Size([15]).

  • which SMPL model

    Thank you for releasing the source code. However, I am confused about the SMPL model. Which SMPL model is used and what is the full name of the SMPL model? What is the folder structure of data/smpl?

  • Cannot change subject in config file.

    https://github.com/LemonATsu/A-NeRF/blob/fe553717052cc2696714177566641cdaaba0459a/configs/surreal/surreal.txt#L7

    Changing the subject config value to "male", for example, when running the default test of run_render.py for SURREAL yields a KeyError. Shouldn't this setting be acceptable, given that the dataset has two values (0: 'female', 1: 'male') for 'Gender'?

  • Problem with smaller N_rand.

    Hi! Thanks for releasing the code.

    I am trying to train on a single GPU and changed N_rand from 3072 to 2048. Other parameters and the environment are set up as recommended. Then I came across this error:

    /opt/conda/conda-bld/pytorch_1634272128894/work/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [0,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
    [... the same assertion repeated for threads [24,0,0] through [44,0,0] ...]
      3%|████▎                                                                                                                                        | 15000/490001 [1:36:36<50:59:00,  2.59it/s]
    Traceback (most recent call last):
      File "run_nerf.py", line 625, in <module>
        train()
      File "run_nerf.py", line 559, in train
        kp_val, bone_val, skt_val, _, _ = popt_layer(render_data["kp_idxs"])
      File "/home/crh/anaconda3/envs/anerf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/crh/code/nerf/A-NeRF/core/pose_opt.py", line 313, in forward
        return self.calculate_kinematic(idxs, rest_pose_idxs)
      File "/home/crh/code/nerf/A-NeRF/core/pose_opt.py", line 387, in calculate_kinematic
        pelvis, bone = self.idx_to_params(unique_idxs)
      File "/home/crh/code/nerf/A-NeRF/core/pose_opt.py", line 326, in idx_to_params
        return pelvis, bones[idx]
    RuntimeError: CUDA error: device-side assert triggered
    CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    

    I have tried retraining several times; this error always happens exactly at the 25000th iteration. Any idea?
