ICON: Implicit Clothed humans Obtained from Normals

Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black

CVPR 2022


Paper PDF · Project Page · YouTube Video · Google Colab · Discord Room



News 🚨


Table of Contents
  1. Who needs ICON
  2. TODO
  3. Installation
  4. Dataset Preprocess
  5. Demo
  6. Citation
  7. Acknowledgments
  8. License
  9. Disclosure
  10. Contact


Who needs ICON?

  • Given an RGB image, you could get:
    • image (png): segmentation, normal images (body + cloth), overlap result (RGB + normal)
    • mesh (obj): SMPL(-X) body, reconstructed clothed human
    • video (mp4): self-rotated clothed human
ICON's intermediate results
ICON's normal prediction + reconstructed mesh (w/o & w/ smooth)
  • If you want to create a realistic and animatable 3D clothed avatar directly from a video / sequential images
    • fully-textured with per-vertex color
    • can be animated by SMPL pose parameters
    • natural pose-dependent clothing deformation
3D Clothed Avatar, created from 400+ images using ICON+SCANimate, animated by AIST++

TODO

  • testing code and pretrained models (*self-implemented version)
    • ICON (w/ & w/o global encoder, w/ PyMAF/HybrIK/PIXIE/PARE as HPS)
    • PIFu* (RGB image + predicted normal map as input)
    • PaMIR* (RGB image + predicted normal map as input, w/ PyMAF/PARE as HPS)
  • colab notebook
  • dataset processing pipeline
  • training and evaluation codes
  • Video-to-Avatar module

Installation

Please follow the Installation Instruction to set up all the required packages, extra data, and models.

Dataset Preprocess

Please follow the Data Preprocess Instruction to generate the train/val/test dataset from raw scans (THuman2.0).

Demo

cd ICON/apps

# PIFu* (*: re-implementation)
python infer.py -cfg ../configs/pifu.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# PaMIR* (*: re-implementation)
python infer.py -cfg ../configs/pamir.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/ global filter (better visual details --> lower Normal Error)
python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python infer.py -cfg ../configs/icon-nofilter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}

More Qualitative Results

Comparison with other state-of-the-art methods
Predicted normals on in-the-wild images with extreme poses


Citation

@inproceedings{xiu2022icon,
  title={{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle={IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year={2022}
}

Acknowledgments

We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.

We also benefit from many other great open-source resources.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE Project).

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For more questions, please contact [email protected]

For commercial licensing, please contact [email protected]

Comments
  • Trouble getting ICON results

    After installing all packages, I successfully got results for PIFu and PaMIR, but I hit a runtime error when running the ICON demo. Could you advise what setting is wrong?

    $ python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
    
    Traceback (most recent call last):
      File "infer.py", line 304, in <module>
        verts_pr, faces_pr, _ = model.test_single(in_tensor)
      File "./ICON/apps/ICON.py", line 738, in test_single
        sdf = self.reconEngine(opt=self.cfg,
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 148, in forward
        return self._forward_faster(**kwargs)
      File "../lib/common/seg3d_lossless.py", line 170, in _forward_faster
        occupancys = self.batch_eval(coords, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 139, in batch_eval
        occupancys = self.query_func(**kwargs, points=coords2D)
      File "../lib/common/train_util.py", line 338, in query_func
        preds = netG.query(features=features,
      File "../lib/net/HGPIFuNet.py", line 285, in query
        smpl_sdf, smpl_norm, smpl_cmap, smpl_ind = cal_sdf_batch(
      File "../lib/dataset/mesh_util.py", line 231, in cal_sdf_batch
        residues, normals, pts_cmap, pts_ind = func(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/mesh_distance.py", line 79, in forward
        output = self.search_tree(triangles, points)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 109, in forward
        output = BVHFunction.apply(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 42, in forward
        outputs = bvh_distance_queries_cuda.distance_queries(
    RuntimeError: after reduction step 1: cudaErrorInvalidDevice: invalid device ordinal
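
    For what it's worth, cudaErrorInvalidDevice from a compiled CUDA extension usually means the extension and PyTorch disagree about which GPU is visible. A hedged diagnostic sketch, not from the original report (the device index 0 is an assumption):

        # Expose exactly one valid GPU before torch or the extension is imported,
        # so every internal .cuda() call resolves to the same device.
        import os
        os.environ["CUDA_VISIBLE_DEVICES"] = "0"

        import torch
        print(torch.cuda.device_count(), torch.cuda.get_device_name(0))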
    
  • ConnectionError: HTTPSConnectionPool

    requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbc4ad7deb0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

  • Error: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    I'm getting this error when installing locally on my workstation via the colab bash script.

    .../ICON/pytorch3d/pytorch3d/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    This is after installing pytorch3d locally as recommended; conda has too many conflicts and never resolves.

    Installing torch through pip (1.8.2+cu111) works up until the infer.py step, because bvh_distance_queries only supports CUDA 11.0. That would most likely require compiling against 11.0, but it will probably lead to more errors, as I don't know what this repository's dependencies require as far as torch goes.

  • ModuleNotFoundError: No module named 'bvh_distance_queries_cuda'

    Hi, thank you so much for the wonderful work and the corresponding code. I am facing the following issue: https://github.com/YuliangXiu/ICON/blob/0045bd10f076bf367d25b7dac41d0d5887b8694f/lib/bvh-distance-queries/bvh_distance_queries/bvh_search_tree.py#L27

    Is there any .py file called bvh_distance_queries_cuda? Please let me know a possible solution. Thank you for your effort and help :)

  • Some questions about training

    I would like to know some details about training. Is the ground-truth SMPL or the predicted SMPL used when training ICON? And what about the normal images? As I understand the paper and the code, ICON first trains the normal network and then the implicit reconstruction network. When reproducing ICON, I don't know whether to choose the ground-truth or the predicted data for the SMPL model and the normal images, respectively.

  • Problem when using PIXIE as hps_type

    Sorry to open a similar issue.

    This came up before in #30, but it was never fully resolved, so I'm asking again.

    The problem has persisted for about five days.

    The advice in that issue was "Sorry, try to use PyYAML==5.1.1", but after changing to 5.1.1 the problem remained the same, so I restored the latest version of PyYAML. Is there any other solution? With the latest PyYAML I get:

    Traceback (most recent call last):
      File "infer.py", line 96, in <module>
        dataset = TestDataset(dataset_param, device)
      File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 105, in __init__
        self.hps = PIXIE(config = pixie_cfg, device=self.device)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 49, in __init__
        self._create_model()
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 115, in _create_model
        self.smplx = SMPLX(self.cfg.model).to(self.device)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/models/SMPLX.py", line 156, in __init__
        self.extra_joint_selector = JointsFromVerticesSelector(
      File "/workspace/fashion-ICON/apps/../lib/pixielib/models/lbs.py", line 399, in __init__
        data = yaml.load(f)
    TypeError: load() missing 1 required positional argument: 'Loader'
    

    Per the original repo's issue (https://github.com/YuliangXiu/ICON/issues/30) and the comments above, this error might be resolved by installing PyYAML==5.1.1, but that produces another error:

    Traceback (most recent call last):
      File "infer.py", line 102, in <module>
        for data in pbar:
      File "/opt/conda/envs/icon/lib/python3.8/site-packages/tqdm/std.py", line 1180, in __iter__
        for obj in iterable:
      File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 191, in __getitem__
        preds_dict = self.hps.forward(img_hps.to(self.device))
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 56, in forward
        param_dict = self.encode({'body': {'image': data}}, threthold=True, keep_local=True, copy_and_paste=False)
      File "/opt/conda/envs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 259, in encode
        cropped_image, cropped_joints_dict = self.part_from_body(image_hd, part_name, points_dict)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 166, in part_from_body
        cropped_image, tform = self.Cropper[cropper_key].crop(
      File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 98, in crop
        cropped_image, tform = crop_tensor(image, center, bbox_size, self.crop_size)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 78, in crop_tensor
        cropped_image = warp_affine(
    TypeError: warp_affine() got an unexpected keyword argument 'flags'
    

    Is there any solution to run the PIXIE module?
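
    A hedged sketch of the usual PyYAML >= 6 fix for the first traceback, assuming the yaml.load call in lib/pixielib/models/lbs.py can be edited (cfg_path below is a hypothetical stand-in for the file it opens): pass an explicit loader instead of downgrading.

        import yaml

        # PyYAML >= 6 removed the implicit default Loader; be explicit instead.
        with open(cfg_path) as f:       # cfg_path: hypothetical stand-in path
            data = yaml.safe_load(f)    # same as yaml.load(f, Loader=yaml.SafeLoader)

    The second traceback looks like a kornia API change rather than a PyYAML problem: newer kornia versions renamed warp_affine's flags keyword, so pinning kornia to the version the repo was developed against, or renaming that keyword argument in lib/pixielib/utils/tensor_cropper.py, may resolve it.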

  • Bugs when running inference on a GPU other than GPU 0

    Thanks for your great work. I want to use another GPU to run the demo, so I modified https://github.com/YuliangXiu/ICON/blob/53273e081cbc15e3afeba098f067a32cd4db4771/apps/infer.py#L71 to

       os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3,4,5,6,7"
    

    Then I run

     python infer.py -cfg ../configs/icon-filter.yaml -gpu 1 -in_dir ../examples -out_dir ../results -hps_type pixie
    

    But errors occur (screenshot 1). I then found that the implementation of point_to_mesh_distance in kaolin uses .cuda() to force tensors onto GPU 0:
    https://github.com/YuliangXiu/ICON/blob/53273e081cbc15e3afeba098f067a32cd4db4771/lib/dataset/mesh_util.py#L282 So I modified https://github.com/NVIDIAGameWorks/kaolin/blob/54d8fa438f8987444637f80da02bb0b862d3694d/kaolin/metrics/trianglemesh.py#L116-L118 to

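        # allocate the result buffers on the same device as the query points,
        # instead of on the hard-coded default GPU 0: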
            min_dist = torch.zeros((num_points), dtype=points.dtype).to(points.device)
            min_dist_idx = torch.zeros((num_points), dtype=torch.long).to(points.device)
            dist_type = torch.zeros((num_points), dtype=torch.int32).to(points.device)
    

    and reinstalled kaolin from source. Then I hit one of the errors shown in screenshots 2 and 3. Using GPU 2, GPU 3, and so on also triggers the error. Could you help me solve these errors?

  • Dimension mismatch issue when running on a local PC

    Hey @YuliangXiu, I tried to set up the complete dependency stack on my Ubuntu 18.04 PC (PyTorch 1.6, CUDA 10.1), installing the dependencies in requirements.txt one by one, and faced a lot of issues along the way. After that I had an issue loading the model in the rembg module, so I manually downloaded the model file and modified rembg accordingly. I also corrected the process_image function in lib/pymaf/utils/imutils.py.

    This produces img_hps with shape [3, 224, 224] in my case, which is then fed to pymaf_net.py at line 282 to extract features using the defined backbone (res50). But the backbone's first conv layer has weights of shape [64, 3, 7, 7] and expects a batched 4-D input, and that's why I'm getting the dimension-mismatch runtime error.

    Note: I have modified image_to_pymaf_tensor in get_transformer() from lib/pymaf/utils/imutils.py as per my PyTorch version.

    image_to_pymaf_tensor = transforms.Compose([
            transforms.ToPILImage(),                   #Added by us
            transforms.Resize(224),
            transforms.ToTensor(),                     #Added by us
            transforms.Normalize(mean=constants.IMG_NORM_MEAN,
                                 std=constants.IMG_NORM_STD)
        ])
    
    ICON:
    [w/ Global Image Encoder]: True
    [Image Features used by MLP]: ['normal_F', 'normal_B']
    [Geometry Features used by MLP]: ['sdf', 'norm', 'vis', 'cmap']
    [Dim of Image Features (local)]: 6
    [Dim of Geometry Features (ICON)]: 7
    [Dim of MLP's first layer]: 13
    
    initialize network with xavier
    initialize network with xavier
    Resume MLP weights from ../data/ckpt/icon-filter.ckpt
    Resume normal model from ../data/ckpt/normal.ckpt
    Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Dataset Size: 2
      0%|                                                                                                                                             | 0/2 [00:00<?, ?it/s]*********************************
    img_np shape: (512, 512, 3)
    img_hps shape: torch.Size([3, 224, 224])
    input shape x in pymaf_net : torch.Size([3, 224, 224])
    input shape x in hmr : torch.Size([3, 224, 224])
      0%|                                                                                                                                             | 0/2 [00:01<?, ?it/s]
    Traceback (most recent call last):
      File "infer.py", line 97, in <module>
        for data in pbar:
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
        for obj in iterable:
      File "../lib/dataset/TestDataset.py", line 166, in __getitem__
        preds_dict = self.hps(img_hps.to(self.device))
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/pymaf/models/pymaf_net.py", line 285, in forward
        s_feat, g_feat = self.feature_extractor(x)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/pymaf/models/hmr.py", line 159, in forward
        x = self.conv1(x)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
        return self._conv_forward(input, self.weight)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead
    
    

    Please share your thoughts on this.
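
    The traceback itself suggests the fix: the ResNet backbone's first conv expects a batched 4-D tensor [N, C, H, W]. A minimal hedged sketch, assuming img_hps really is [3, 224, 224] at that point (as the debug prints show):

        # Hypothetical patch around the self.hps call in lib/dataset/TestDataset.py:
        # add the missing batch dimension before the HPS forward pass.
        img_hps = img_hps.unsqueeze(0)                  # [3, 224, 224] -> [1, 3, 224, 224]
        preds_dict = self.hps(img_hps.to(self.device))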

  • Error while running the demo colab

    Hey @YuliangXiu, I'm facing the error below after running the demo script:

    %cd /content/ICON/apps
    !source activate icon && python infer.py -cfg ../configs/icon-filter.yaml -loop_smpl 100 -loop_cloth 100 -colab -gpu 0
    
    Traceback (most recent call last):
      File "infer.py", line 87, in <module>
        dataset = TestDataset(
      File "/content/ICON/apps/../lib/dataset/TestDataset.py", line 73, in __init__
        self.hps = pymaf_net(path_config.SMPL_MEAN_PARAMS,
      File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 361, in pymaf_net
        model = PyMAF(smpl_mean_params, pretrained)
      File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 207, in __init__
        Regressor(feat_dim=ref_infeat_dim,
      File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 36, in __init__
        self.smpl = SMPL(SMPL_MODEL_DIR, batch_size=64, create_transl=False)
      File "/content/ICON/apps/../lib/pymaf/models/smpl.py", line 23, in __init__
        super().__init__(*args, **kwargs)
      File "/usr/local/envs/icon/lib/python3.8/site-packages/smplx/body_models.py", line 149, in __init__
        data_struct = Struct(**pickle.load(smpl_file,
    _pickle.UnpicklingError: invalid load key, '\x03'.
    

    I tried the code snippet below and it works fine in a colab cell:

    import pickle
    with open('/content/ICON/data/smpl_related/models/smpl/SMPL_FEMALE.pkl','rb') as f1:
      d1=pickle.load(f1,encoding='latin1')
    print(d1) 
    

    Can anyone please suggest how to tackle this issue?
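
    One hedged observation: the SMPL() call in the traceback passes no gender, and smplx defaults to gender='neutral', so the file being unpickled may be SMPL_NEUTRAL.pkl rather than the SMPL_FEMALE.pkl tested above. A quick sanity check on the suspected file (the path is an assumption based on the snippet above):

        # A real pickle starts with the PROTO opcode b'\x80' followed by the
        # protocol byte; a failed or partial download (e.g. an HTML page) will not.
        path = '/content/ICON/data/smpl_related/models/smpl/SMPL_NEUTRAL.pkl'
        with open(path, 'rb') as f:
            print(f.read(2))   # expect something like b'\x80\x02'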

  • Modified smplx or cannot import name 'ModelOutput' from 'smplx.body_models'

    Hi Yuliang, thanks for releasing the ICON code and adding instructions for building/testing.

    I am having the issue shown in the attached image. The instructions say to clone smplx from git@github.com:YuliangXiu/smplx.git, but that repo does not exist.

    The original smplx from https://github.com/vchoutas/smplx does contain a ModelOutput class, but not with the properties you are using in your repo/code, hence the error I am getting when testing the ICON example.

  • SMPL parameters question

    Hello, I have a question about the shape of optimized_pose. The standard SMPL model takes 72 pose parameters, but here the shape is [1, 23, 3, 3]. How do I convert it to a [1, 72] shape?
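
    A hedged sketch of the usual conversion, assuming optimized_pose holds rotation matrices for the 23 body joints only (the global orientation lives in a separate tensor, here called global_orient as a stand-in; 24 joints x 3 axis-angle values = 72) and that pytorch3d, which this repo already depends on, is available:

        import torch
        from pytorch3d.transforms import matrix_to_axis_angle

        # body pose: [1, 23, 3, 3] rotations -> [1, 23, 3] axis-angle -> [1, 69]
        body_pose = matrix_to_axis_angle(optimized_pose).reshape(1, -1)
        # prepend the global orientation (converted the same way) to get [1, 72]
        full_pose = torch.cat(
            [matrix_to_axis_angle(global_orient).reshape(1, -1), body_pose], dim=1)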

  • OpenGL.raw.EGL._errors.EGLError: EGLError( )

    When I run bash render_batch.sh debug, it gives the following error:

    OpenGL.raw.EGL._errors.EGLError: EGLError( err = EGL_NOT_INITIALIZED, baseOperation = eglInitialize, cArguments = ( <OpenGL._opaque.EGLDisplay_pointer object at 0x7f7b3d0ee2c0>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d0ee440>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d106bc0>, ), result = 0 )

    How can I fix this?
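
    One hedged workaround, assuming the rendering script goes through PyOpenGL and respects its platform selection: EGL_NOT_INITIALIZED on a headless machine often means the driver exposes no usable EGL display, and forcing a software (OSMesa) context can sidestep it if OSMesa is installed.

        import os
        # must be set before anything imports OpenGL / the renderer
        os.environ["PYOPENGL_PLATFORM"] = "osmesa"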

  • Add web demo/model to Hugging Face

    Hi, would you be interested in adding ICON to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.

    Examples from other organizations:
    Keras: https://huggingface.co/keras-io
    Microsoft: https://huggingface.co/microsoft
    Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
    github: https://github.com/salesforce/BLIP Space: https://huggingface.co/spaces/salesforce/BLIP
    github: https://github.com/facebookresearch/omnivore Space: https://huggingface.co/spaces/akhaliq/omnivore

    And here are guides for adding Spaces/models/datasets to your org:
    How to add a Space: https://huggingface.co/blog/gradio-spaces
    How to add models: https://huggingface.co/docs/hub/adding-a-model
    Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested; if you have any questions, we can also help with the technical implementation.
