Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

nvdiffrec

Teaser image

Joint optimization of topology, materials and lighting from multi-view image observations as described in the paper Extracting Triangular 3D Models, Materials, and Lighting From Images.

For differentiable marching tetrahedra, we have adapted code from NVIDIA's Kaolin: A Pytorch Library for Accelerating 3D Deep Learning Research.

Licenses

Copyright © 2022, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License.

For business inquiries, please contact [email protected]

Installation

Requires Python 3.6+, VS2019+, Cuda 11.3+ and PyTorch 1.10+

Tested in Anaconda3 with Python 3.9 and PyTorch 1.10

One time setup (Windows)

Install the Cuda toolkit (required to build the PyTorch extensions). We support Cuda 11.3 and above. Pick the appropriate version of PyTorch compatible with the installed Cuda toolkit. Below is an example with Cuda 11.3

conda create -n dmodel python=3.9
activate dmodel
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install ninja imageio PyOpenGL glfw xatlas gdown
pip install git+https://github.com/NVlabs/nvdiffrast/
pip install --global-option="--no-networks" git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
imageio_download_bin freeimage

Every new command prompt

activate dmodel

Examples

Our approach is designed for high-end NVIDIA GPUs with large amounts of memory. To run on mid-range GPUs, reduce the batch size parameter in the .json files.
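As an illustration (a minimal sketch, not a complete config), lowering the batch entry in a config such as configs/bob.json reduces peak GPU memory at the cost of noisier gradients; all other entries in the config stay as they are, and the best value depends on your GPU:

{
    "batch" : 2
}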

Simple genus 1 reconstruction example:

python train.py --config configs/bob.json

Visualize training progress (only supported on Windows):

python train.py --config configs/bob.json --display-interval 20

Multi-GPU example (Linux only; experimental: all results in the paper were generated using a single GPU), using PyTorch DDP:

torchrun --nproc_per_node=4 train.py --config configs/bob.json

Below, we show the starting point and the final result. References to the right.

Initial guess Our result

The results will be stored in the out folder. The Spot and Bob models were created and released into the public domain by Keenan Crane.

Included examples

  • spot.json - Extracts a 3D model (geometry, materials, and lighting) of the Spot model from image observations.
  • spot_fixlight.json - Same as above but assuming known environment lighting.
  • spot_metal.json - Example of joint learning of materials and high frequency environment lighting to showcase split-sum.
  • bob.json - Simple example of a genus 1 model.

Datasets

We additionally include configs (nerf_*.json, nerd_*.json) to reproduce the main results of the paper. We rely on third party datasets, which are courtesy of their respective authors. Please note that individual licenses apply to each dataset. To automatically download and pre-process all datasets, run the download_datasets.py script:

activate dmodel
cd data
python download_datasets.py

Below follows more information and instructions on how to manually install the datasets (in case the automated script fails).

NeRF synthetic dataset: Our view interpolation results use the synthetic dataset from the original NeRF paper. To manually install it, download the NeRF synthetic dataset archive and unzip it into the nvdiffrec/data folder. This is required for running any of the nerf_*.json configs.

NeRD dataset: We use datasets from the NeRD paper, which features real-world photogrammetry and inaccurate (manually annotated) segmentation masks. Clone the NeRD datasets using git and rescale them to 512 x 512 pixel resolution using the script scale_images.py. This is required for running any of the nerd_*.json configs.

activate dmodel
cd nvdiffrec/data/nerd
git clone https://github.com/vork/ethiopianHead.git
git clone https://github.com/vork/moldGoldCape.git
python scale_images.py
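If scale_images.py fails in your environment, the rescaling it performs can be reproduced with a short standalone script along the following lines. This is an illustrative sketch only, not the bundled script; the dataset folder names, image extensions, and in-place overwriting are assumptions.

import glob
import os
from PIL import Image

# Assumed locations of the cloned NeRD datasets; adjust to your layout.
for dataset in ["ethiopianHead", "moldGoldCape"]:
    for path in glob.glob(os.path.join(dataset, "**", "*.jpg"), recursive=True):
        img = Image.open(path)
        # Downscale to the 512 x 512 resolution used by the nerd_*.json configs.
        img.resize((512, 512), Image.LANCZOS).save(path)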

Server usage (through Docker)

  • Build docker image.
cd docker
./make_image.sh nvdiffrec:v1
  • Start an interactive docker container: docker run --gpus device=0 -it --rm -v /raid:/raid nvdiffrec:v1 bash

  • Detached docker: docker run --gpus device=1 -d -v /raid:/raid -w=[path to the code] nvdiffrec:v1 python train.py --config configs/bob.json

Owner
NVIDIA Research Projects
Comments
  • Custom dataset config options

    I've been able to run this on my own data (slowly, on a 3070) using U^2-Net matting and the colmap2nerf code from instant-ngp (if others are following this, remove the image extensions and it should run on Windows). The issue I'm having is the val and test data, or the lack of them. Are there config options to remove the requirement for those datasets? Copying and renaming the JSON lists to val and train works, but it is rather cumbersome, and I was wondering if I was missing a better option, as with the NeRD dataset preparation, which uses manual masks not applied to the image itself (which is possible with U^2-Net, and would help with COLMAP mapping since the background can still be used for feature tracking).

    I haven't looked into what data is required to be present in the config, nor the resolution/training options; I'm just wondering what the generally intended arguments and method are for presenting your own data when no val set is available.

  • noise in rendered results (noise in diffuse/specular?)

    Thank you for the amazing work!!

    I did a quick trial on custom in-the-wild data and found that there is some noise in the fitting results. Did I mess something up? Have you observed this phenomenon before? Note that I fixed the mesh vertices and only optimized the lighting / material.

    | rendered | reference |
    |---|---|
    | test_000002_opt | test_000002_ref |
    | test_000007_opt | test_000007_ref |

    The texture maps look like:

    | kd | ks | normal |
    |---|---|---|
    | texture_kd | texture_ks | texture_n |

    Looking forward to your great help! Thanks a lot

  • How to get a model with higher accuracy and more details?

    Hello,

    I have run "python train.py --config configs/nerf_lego.json" with default setting. The picture below shows my mesh output.

    1. The smooth surface is not flat (as shown in the red box). How should I improve it?
    2. Lots of detail is lost due to the geometric simplification. How can I get a model with more details?

    (image: mesh output)

    Thank you in advance!

  • How to show mesh with textures?

    Thank you for the amazing work! I have trained on the NeRF chair dataset and I got a great result. I want to show the mesh with its texture in MeshLab, but the chair mesh has no texture in MeshLab. What method should I use?

  • How to generate the NeRF validation data, and is there a rule for using test data as validation?

    Hello. I want to use my own training data to generate a 3D model, but I see there are three kinds of data (train/test/val), and the val data may not be used(???). Could someone share how to generate the validation data (corresponding to the test data folder) and what rules to follow when using it as TEST data? As I understand it, the test data should be the accurate data for the implementation. In addition, I see that files like r_0_depth_0000.png are not used by the implementation, right?

     What I know:
     a. I know the training data (the data folder 'data') and **transforms_train.json** can be generated by **COLMAP**, but I don't know what rule to use to generate the TEST data.

    So, could someone please share information about this issue!

    Regards

  • Mesh surface roughness obtained using NeRD data

    Thank you for your amazing work. I made a dataset in NeRD format. After training, the surface of the mesh is not very good. What parameters should I modify?

    My config is like this:

        {
            "ref_mesh": "data/nerd/shoe",
            "random_textures": true,
            "iter": 20000,
            "save_interval": 100,
            "texture_res": [ 2048, 2048 ],
            "train_res": [960, 540],
            "batch": 4,
            "learning_rate": [0.03, 0.03],
            "kd_min" : [0.03, 0.03, 0.03],
            "kd_max" : [0.8, 0.8, 0.8],
            "ks_min" : [0, 0.08, 0],
            "ks_max" : [0, 1.0, 1.0],
            "dmtet_grid" : 128,
            "mesh_scale" : 3,
            "camera_space_light" : true,
            "background" : "white",
            "display" : [{"bsdf":"kd"}, {"bsdf":"ks"}, {"bsdf" : "normal"}],
            "out_dir": "shoe"
        }

  • Models turn to swiss cheese after >5000 iters

    What loss is supposed to control regularity? I've been focusing on this relatively simple model to see what sort of values work well in training for handheld video. The left-hand outer image looks like it fits the images until about 5k iters (batch size 1) into training, but the actual model looks very irregular and finally collapses near the end of the training tests.

    The COLMAP (both exhaustive and sequential) tracking looks like it maps accurately. The paper describes a loss to solve this issue, but I don't know whether the total loss must be reduced due to the batch size, or the loss for model regularity must be increased (and I don't know the config line for it). 30% of the dataset images have been manually removed due to motion blur and being too far off the edges, which has helped in early training and slightly reduced early collapses.

    I have included the dataset and the key iters that start collapsing (0-1000 iters, 4000-6000 iters) and the final models in the zip below (updated with more compressed iter images): https://files.catbox.moe/3v79c6.zip

    Is the training scheme running through the dataset sequentially, and are the final iters therefore failing due to the images at the end? Both passes seem to fail near the end of the session. If so, randomizing the images (if captured from video) would spread the bad frames out, instead of destroying the model at the end of training.

  • build tiny-cuda-nn error

    Hello there, I have a problem compiling tiny-cuda-nn. I compiled it on a Windows 10 system with an RTX 3090, CUDA 11.3 and VS2019. Somehow I just cannot compile tiny-cuda-nn, and I get too many errors that seem to come from VS. Can you help me out?

  • MLP texture is not supported for DLMesh

    Thank you for the amazing work!!

    I tried to enable the MLP texture for DLMesh by changing this line https://github.com/NVlabs/nvdiffrec/blob/main/train.py#L632 to

    mat = initial_guess_material(geometry, True, FLAGS)
    

    However, when I run the job, it produces the following error:

    iter=    0, img_loss=0.020718, reg_loss=0.000000, lr=0.00010, time=606.2 ms, rem=10.10 m
    Traceback (most recent call last):
      File "/home/wangjk/programs/nvdiffrec/train.py", line 778, in <module>
        geometry, mat = optimize_mesh(glctx, geometry, mat, lgt, dataset_train, dataset_validate, FLAGS, pass_idx=0, pass_name="mesh_pass", 
      File "/home/wangjk/programs/nvdiffrec/train.py", line 542, in optimize_mesh
        total_loss.backward()
      File "/home/wangjk/anaconda3/envs/torch-ngp/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/wangjk/anaconda3/envs/torch-ngp/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
        Variable._execution_engine.run_backward(
    RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
    

    Do you have any ideas about this? Really appreciate it ;)

  • Can GPU shared memory be used with this feature?

    Hello. I am running this feature under Windows 11, and I was hoping that the GPU's shared memory could relieve the memory pressure, but it still fails to run with higher-resolution images.

    So I would like to know whether the shared memory can be used with this feature (perhaps it needs to be solved in tiny-cuda-nn???).

    Image resolution: 3648x3648. GPU: RTX 3090 (24 GB)

    { "ref_mesh": "data/nerf_synthetic/xxxx", "random_textures": true, "iter": 5000, "save_interval": 100, "texture_res": [ 1024, 1024 ], "train_res": [3648, 3648], "batch": 1, "learning_rate": [0.03, 0.01], "ks_min" : [0, 0.1, 0.0], "dmtet_grid" : 64, "mesh_scale" : 1.5, "laplace_scale" : 3000, "display": [{"latlong" : true}, {"bsdf" : "kd"}, {"bsdf" : "ks"}, {"bsdf" : "normal"}], "layers" : 4, "background" : "white", "out_dir": "nerf_xxxx" }

    Loading extension module renderutils_plugin...
    Traceback (most recent call last):
      File "D:\zhansheng\proj\windows\nvdiffrec\train.py", line 594, in <module>
        geometry, mat = optimize_mesh(glctx, geometry, mat, lgt, dataset_train, dataset_validate,
      File "D:\zhansheng\proj\windows\nvdiffrec\train.py", line 415, in optimize_mesh
        img_loss, reg_loss = trainer(target, it)
      File "C:\Users\jinshui\anaconda3\envs\dmodel\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "D:\zhansheng\proj\windows\nvdiffrec\train.py", line 299, in forward
        return self.geometry.tick(glctx, target, self.light, self.material, self.image_loss_fn, it)
      File "D:\zhansheng\proj\windows\nvdiffrec\geometry\dmtet.py", line 218, in tick
        buffers = self.render(glctx, target, lgt, opt_material)
      File "D:\zhansheng\proj\windows\nvdiffrec\geometry\dmtet.py", line 209, in render
        return render.render_mesh(glctx, opt_mesh, target['mvp'], target['campos'], lgt, target['resolution'], spp=target['spp'],
      File "D:\zhansheng\proj\windows\nvdiffrec\render\render.py", line 231, in render_mesh
        layers += [(render_layer(rast, db, mesh, view_pos, lgt, resolution, spp, msaa, bsdf), rast)]
      File "D:\zhansheng\proj\windows\nvdiffrec\render\render.py", line 166, in render_layer
        buffers = shade(gb_pos, gb_geometric_normal, gb_normal, gb_tangent, gb_texc, gb_texc_deriv,
      File "D:\zhansheng\proj\windows\nvdiffrec\render\render.py", line 46, in shade
        all_tex = material['kd_ks_normal'].sample(gb_pos)
      File "D:\zhansheng\proj\windows\nvdiffrec\render\mlptexture.py", line 90, in sample
        p_enc = self.encoder(_texc.contiguous())
      File "C:\Users\jinshui\anaconda3\envs\dmodel\lib\site-packages\torch\nn\modules\module.py", line 1128, in _call_impl
        result = forward_call(*input, **kwargs)
      File "C:\Users\jinshui\anaconda3\envs\dmodel\lib\site-packages\tinycudann\modules.py", line 119, in forward
        output = _module_function.apply(
      File "C:\Users\jinshui\anaconda3\envs\dmodel\lib\site-packages\tinycudann\modules.py", line 31, in forward
        native_ctx, output = native_tcnn_module.fwd(input, params)
    RuntimeError: C:/Users/jinshui/AppData/Local/Temp/pip-req-build-ii64hvij/include\tiny-cuda-nn/gpu_memory.h:558 cuMemSetAccess(m_base_address + m_size, n_bytes_to_allocate, &access_desc, 1) failed with error CUDA_ERROR_OUT_OF_MEMORY
    Could not free memory: C:/Users/jinshui/AppData/Local/Temp/pip-req-build-ii64hvij/include\tiny-cuda-nn/gpu_memory.h:462 cuMemAddressFree(m_base_address, m_max_size) failed with error CUDA_ERROR_INVALID_VALUE

    Regards

  • Implementations of MLPTexture3D

    Hello,

    Thanks for releasing the code for this amazing work!

    I've been going through the code and found some implementations in MLPTexture3D confusing:

    1. What's the meaning of scaling the gradient? I set gradient_scaling=1.0 and did not find much difference in the output.
    2. Why not use the tcnn MLP but only the tcnn Encoding?

    Thanks!

  • error in one time setup

    When I run pip install --global-option="--no-networks" git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch I get

        ERROR: Command errored out with exit status 1:
         command: 'C:\Users\nunob\anaconda3\envs\dmodel\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\nunob\\AppData\\Local\\Temp\\pip-req-build-ajsps1vv\\bindings/torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\nunob\\AppData\\Local\\Temp\\pip-req-build-ajsps1vv\\bindings/torch\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\nunob\AppData\Local\Temp\pip-pip-egg-info-gm8vy1dn'
             cwd: C:\Users\nunob\AppData\Local\Temp\pip-req-build-ajsps1vv\bindings/torch
        Complete output (7 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "C:\Users\nunob\AppData\Local\Temp\pip-req-build-ajsps1vv\bindings/torch\setup.py", line 3, in <module>
            import torch
          File "C:\Users\nunob\anaconda3\envs\dmodel\lib\site-packages\torch\__init__.py", line 126, in <module>
            raise err
        OSError: [WinError 182] The operating system cannot run %1. Error loading "C:\Users\nunob\anaconda3\envs\dmodel\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
    
  • Render the final result?

    Hello. First of all, thank you very much for your excellent work. I'm sorry I'm not very familiar with graphics engines, but now I want to render the final output, such as the hotdog mesh. There are some questions below that I hope to get your answer to:

    1. According to the description in the article, kd.png / ks.png / kn.png describe the material of the model, but the final output has a .mtl file. Is the .mtl file equivalent to the three PNG files? Currently, when I use mesh.load_mesh(mesh.obj), the API directly uses mesh.mtl instead of loading the three PNG files.
    2. If the .mtl file means the same as the three PNG files, can I render by loading the model and then loading the PNG files?

  • Is there any way to use datasets from instant ngp?

    Hi, I am trying to train nvdiffrec straight from instant-ngp NeRF datasets but I'm blocked. The instant-ngp datasets use images without alpha, and it seems like nvdiffrec needs images with alpha. Also, nvdiffrec uses train/test/val datasets; is there a way to use the same input as instant-ngp uses?

    Thanks!
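    A hedged sketch of one possible workaround (not from the repo): if you have per-image binary masks from a matting tool, the RGB images can be merged with them into RGBA inputs along these lines. The folder names, file extensions, and one-mask-per-image layout are assumptions.

        # Illustrative sketch: merge RGB images with separate binary masks into RGBA
        # images, since nvdiffrec appears to need the segmentation in the alpha channel.
        import glob
        import os
        from PIL import Image

        rgb_dir, mask_dir, out_dir = "images", "masks", "images_rgba"  # assumed layout
        os.makedirs(out_dir, exist_ok=True)

        for rgb_path in glob.glob(os.path.join(rgb_dir, "*.png")):
            name = os.path.basename(rgb_path)
            rgb  = Image.open(rgb_path).convert("RGB")
            mask = Image.open(os.path.join(mask_dir, name)).convert("L")  # 0 = background
            rgb.putalpha(mask)          # attach mask as alpha channel
            rgb.save(os.path.join(out_dir, name))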

  • CO3D dataset

    Hi. Thanks for sharing the code. Great work!

    I am trying to train the model with objects from the CO3D dataset (link). However, I am getting pretty bad results and I suspect that my camera matrices are incorrect. I would really appreciate some help if possible.

    Here is how I create the matrices (I used this article as a reference to create the projection matrix):

    
    def get_camera(self, annotations):
            image_size = annotations["image"]["size"]
            viewpoint_frame = annotations["viewpoint_frame"]
            R = viewpoint_frame['R']
            T = viewpoint_frame['T']
            focal_length = viewpoint_frame['focal_length']
            principal_point = torch.tensor(viewpoint_frame['principal_point'])
    
            mv = torch.tensor([
                [R[0][0],R[0][1],R[0][2], T[0]],
                [R[1][0],R[1][1],R[1][2], T[1]],
                [R[2][0],R[2][1],R[2][2], T[2]],
                [    0 ,     0,      0,    1  ],
            ], dtype=torch.float32)
    
            # Transform mv from Pytorch3D to OpenGL coordinates 
            rotate_y = util.rotate_y(math.pi)
            mv = rotate_y @ mv
    
            # Convert principal_point and focal_length from NDC space to pixel space
            half_image_size = torch.tensor(list(reversed(image_size))) / 2.0
            principal_point_px = (-1.0 * (principal_point-1) * half_image_size)
            focal_length_px =  torch.tensor(focal_length) * half_image_size
    
            A = 2*focal_length_px[0] / image_size[1] #2*f/w
            B = 2*focal_length_px[1] / image_size[0] #2*f/h
            C = (image_size[1] - 2*principal_point_px[0]) / image_size[1] #(w – 2*cx)/w
            D = (image_size[0] - 2*principal_point_px[1]) / image_size[0] #(h – 2*cy)/h
            n=0.1
            f=1000.0
    
            proj = torch.tensor([
                [A, 0, -C, 0],
                [0, -B, D, 0],
                [0, 0.0, (-f - n) / (f - n), -2.0*f*n/(f-n)],
                [0, 0, -1, 0],
            ])
    
            campos = torch.linalg.inv(mv)[:3, 3]
            mvp    = proj @ mv
    
    

    Result:

    img_dmtet_pass1_000010
