Stratified Transformer for 3D Point Cloud Segmentation (CVPR 2022)

Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia

This is the official PyTorch implementation of our paper Stratified Transformer for 3D Point Cloud Segmentation that has been accepted to CVPR 2022. [arXiv]

Highlight

  1. Our method (Stratified Transformer) achieves state-of-the-art performance on 3D point cloud semantic segmentation on both the S3DIS and ScanNetv2 datasets. It is the first time a point-based method outperforms voxel-based ones, such as SparseConvNet and MinkowskiNet;
  2. Stratified Transformer is point-based and built from Transformer blocks with standard multi-head self-attention, enjoying a large receptive field, robust generalization ability, and competitive performance;
  3. This repository provides a memory-efficient implementation that handles variable-length tokens with several custom CUDA kernels, avoiding the unnecessary memory occupied by vacant (padded) tokens. We also use shared memory for further acceleration.

Get Started

Environment

Install dependencies (we recommend conda and pytorch>=1.8.0 for quick installation, but pytorch 1.6.0+ should also work with this repo)

# install torch_points3d

# If you use conda and pytorch>=1.8.0 (this enables quick installation):
conda install pytorch-cluster -c pyg
conda install pytorch-sparse -c pyg
conda install pyg -c pyg
pip install torch_points3d

# Otherwise,
pip install torch_points3d

Install other dependencies

pip install tensorboard timm termcolor tensorboardX

If you run into issues with the above commands, you can also install the environment directly via pip install -r requirements.txt.
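Before compiling the CUDA ops below, it can help to verify that your PyTorch build actually sees CUDA (standard PyTorch calls; the printed CUDA version should match the toolkit you compile against):

python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"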

Make sure gcc and CUDA are installed and that nvcc works (note that installing CUDA via conda does not provide nvcc, so in that case you should install CUDA manually). Then compile and install pointops2 as follows. (We have tested with gcc==7.5.0 and cuda==10.1.)

cd lib/pointops2
python3 setup.py install
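If the build succeeds, the compiled extension should import cleanly. A quick sanity check (the module name pointops2_cuda is the one referenced in the issues below):

python3 -c "import pointops2_cuda"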

Datasets Preparation

S3DIS

Please refer to https://github.com/yanx27/Pointnet_Pointnet2_pytorch for S3DIS preprocessing. Then modify the data_root entry in the .yaml configuration file.

ScanNetv2

Please refer to https://github.com/dvlab-research/PointGroup for the ScanNetv2 preprocessing. Then change the data_root entry in the .yaml configuration file accordingly.
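In both cases, the entry to edit looks like the following excerpt (the path is only an example; point it at your own preprocessed data):

# e.g. in config/s3dis/s3dis_stratified_transformer.yaml
data_root: ../../Dataset/S3DIS/trainval_fullarea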

Training

S3DIS

  • Stratified Transformer
python3 train.py --config config/s3dis/s3dis_stratified_transformer.yaml
  • 3D Swin Transformer (the vanilla version described in our paper)
python3 train.py --config config/s3dis/s3dis_swin3d_transformer.yaml

ScanNetv2

  • Stratified Transformer
python3 train.py --config config/scannetv2/scannetv2_stratified_transformer.yaml
  • 3D Swin Transformer (the vanilla version described in our paper)
python3 train.py --config config/scannetv2/scannetv2_swin3d_transformer.yaml

Note: It is normal for the results on S3DIS to fluctuate within about -0.5% to +0.5% mIoU, probably because S3DIS is relatively small, while the results on ScanNetv2 are comparatively stable.

Testing

For testing, first change the model_path, save_folder and data_root_val (if applicable) accordingly. Then, run the following command.

python3 test.py --config [YOUR_CONFIG_PATH]
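The entries to edit look like the excerpt below (values are placeholders based on the default save_path; save_folder is wherever you want results written, and data_root_val only exists in some configs):

model_path: exp/s3dis/stratified_transformer/model/model_last.pth
save_folder: exp/s3dis/stratified_transformer/result
data_root_val: /path/to/validation/data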

Pre-trained Models

For your convenience, you can download the pre-trained models and training/testing logs here.

Citation

If you find this project useful, please consider citing:

@inproceedings{lai2022stratified,
  title     = {Stratified Transformer for 3D Point Cloud Segmentation},
  author    = {Lai, Xin and Liu, Jianhui and Jiang, Li and Wang, Liwei and Zhao, Hengshuang and Liu, Shu and Qi, Xiaojuan and Jia, Jiaya},
  booktitle = {CVPR},
  year      = {2022}
}
Comments
  • Loss Nan

    Hi, I'm reproducing your excellent work, but the loss goes to NaN and the mIoU drops to 0. Is the learning rate set incorrectly?

    aug: True
    base_lr: 0.006
    batch_size: 8
    batch_size_test: 4
    batch_size_val: 4
    channels: [48, 96, 192, 384]
    classes: 13
    concat_xyz: True
    data_name: s3dis
    data_root: ../../Dataset/S3DIS/trainval_fullarea
    depths: [2, 2, 6, 2]
    dist_backend: nccl
    dist_url: tcp://127.0.0.1:58501
    distributed: True
    downsample_scale: 8
    drop_path_rate: 0.3
    drop_rate: 0.5
    epochs: 50
    eval_freq: 1
    evaluate: True
    fea_dim: 6
    grid_size: 0.04
    grid_sizes: [0.04, 0.08, 0.16, 0.32]
    ignore_label: 255
    jitter_clip: 0.02
    jitter_sigma: 0.005
    k: 16
    loop: 30
    manual_seed: 123
    max_batch_points: 140000
    max_num_neighbors: 34
    model_path: None
    momentum: 0.9
    multiplier: 0.1
    multiprocessing_distributed: True
    names_path: data/s3dis/s3dis_names.txt
    ngpus_per_node: 4
    num_heads: [3, 6, 12, 24]
    num_layers: 4
    optimizer: AdamW
    patch_size: 0.04
    print_freq: 1
    quant_size: 0.01
    quant_sizes: [0.01, 0.02, 0.04, 0.08]
    rank: 0
    ratio: 0.25
    rel_key: True
    rel_query: True
    rel_value: True
    resume: exp/s3dis/stratified_transformer/model/model_last.pth
    save_folder: None
    save_freq: 1
    save_path: exp/s3dis/stratified_transformer/model
    scheduler: MultiStep
    scheduler_update: epoch
    split: val
    start_epoch: 0
    stem_transformer: True
    step_epoch: 30
    sync_bn: True
    test_area: 5
    test_gpu: [0]
    test_list: data/s3dis/list/val5.txt
    test_list_full: data/s3dis/list/val5_full.txt
    test_workers: 4
    train_gpu: [0, 1, 2, 3]
    transformer_lr_scale: 0.1
    up_k: 3
    use_amp: True
    use_xyz: True
    voxel_max: 80000
    voxel_size: 0.04
    warmup: linear
    warmup_iters: 1500
    warmup_ratio: 1e-06
    weight: None
    weight_decay: 0.01
    window_size: [0.16, 0.32, 0.64, 1.28]
    workers: 4
    world_size: 4

    [06/02 00:16:07 main-logger]: Train result at epoch [23/50]: mIoU/mAcc/allAcc 0.0130/0.0769/0.1693.
    WARNING:root:NaN or Inf found in input tensor.
    [06/02 00:16:07 main-logger]: >>>>>>>>>>>>>>>> Start Evaluation >>>>>>>>>>>>>>>>
    [06/02 00:16:28 main-logger]: Test: [1/17] Data 4.412 (4.412) Batch 21.110 (21.110) Loss nan (nan) Accuracy 0.2793.
    [06/02 00:16:31 main-logger]: Test: [2/17] Data 0.002 (2.207) Batch 3.324 (12.217) Loss nan (nan) Accuracy 0.1809.
    [06/02 00:16:36 main-logger]: Test: [3/17] Data 0.001 (1.472) Batch 4.249 (9.561) Loss nan (nan) Accuracy 0.1969.
    [06/02 00:16:38 main-logger]: Test: [4/17] Data 0.001 (1.104) Batch 2.364 (7.762) Loss nan (nan) Accuracy 0.1815.
    [06/02 00:16:48 main-logger]: Test: [5/17] Data 0.001 (0.883) Batch 10.526 (8.314) Loss nan (nan) Accuracy 0.1848.
    [06/02 00:17:02 main-logger]: Test: [6/17] Data 0.001 (0.736) Batch 13.099 (9.112) Loss nan (nan) Accuracy 0.1507.
    [06/02 00:17:08 main-logger]: Test: [7/17] Data 0.001 (0.631) Batch 5.935 (8.658) Loss nan (nan) Accuracy 0.1931.
    [06/02 00:17:09 main-logger]: Test: [8/17] Data 0.001 (0.553) Batch 1.564 (7.771) Loss nan (nan) Accuracy 0.1636.
    [06/02 00:17:10 main-logger]: Test: [9/17] Data 0.001 (0.491) Batch 1.386 (7.062) Loss nan (nan) Accuracy 0.1597.
    [06/02 00:17:13 main-logger]: Test: [10/17] Data 0.000 (0.442) Batch 2.654 (6.621) Loss nan (nan) Accuracy 0.1734.
    [06/02 00:17:15 main-logger]: Test: [11/17] Data 0.001 (0.402) Batch 2.049 (6.205) Loss nan (nan) Accuracy 0.2362.
    [06/02 00:17:26 main-logger]: Test: [12/17] Data 0.001 (0.369) Batch 11.297 (6.630) Loss nan (nan) Accuracy 0.1980.
    [06/02 00:17:29 main-logger]: Test: [13/17] Data 0.002 (0.340) Batch 2.617 (6.321) Loss nan (nan) Accuracy 0.1651.
    [06/02 00:17:55 main-logger]: Test: [14/17] Data 0.001 (0.316) Batch 25.620 (7.700) Loss nan (nan) Accuracy 0.1804.
    [06/02 00:18:04 main-logger]: Test: [15/17] Data 0.001 (0.295) Batch 9.624 (7.828) Loss nan (nan) Accuracy 0.1890.
    [06/02 00:18:06 main-logger]: Test: [16/17] Data 0.001 (0.277) Batch 1.992 (7.463) Loss nan (nan) Accuracy 0.1893.
    [06/02 00:18:09 main-logger]: Test: [17/17] Data 0.001 (0.260) Batch 2.720 (7.184) Loss nan (nan) Accuracy 0.1934.
    [06/02 00:18:10 main-logger]: Val result: mIoU/mAcc/allAcc 0.0146/0.0769/0.1903.
    [06/02 00:18:10 main-logger]: Class_0 Result: iou/accuracy 0.1903/1.0000.
    [06/02 00:18:10 main-logger]: Class_1 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_2 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_3 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_4 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_5 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_6 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_7 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_8 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_9 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_10 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_11 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: Class_12 Result: iou/accuracy 0.0000/0.0000.
    [06/02 00:18:10 main-logger]: <<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<
    WARNING:root:NaN or Inf found in input tensor.
    [06/02 00:18:10 main-logger]: Saving checkpoint to: exp/s3dis/stratified_transformer/model/model_last.pth
    [06/02 00:18:10 main-logger]: lr: [0.006, 0.0006000000000000001]
    batch_size shortened from 2 to 1, points from 156591 to 80000
    [06/02 00:18:20 main-logger]: Epoch: [24/50][1/765] Data 4.474 (4.474) Batch 10.469 (10.469) Remain 60:03:51 Loss nan Lr: [0.006, 0.0006] Accuracy 0.1705.

  • CUDA error: device-side assert triggered

    Hi, authors,

    Thanks a lot for your awesome work.

    I met this error; have you ever encountered it?

    RuntimeError: Caught RuntimeError in replica 0 on device 0.
    Original Traceback (most recent call last):
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
        output = module(*input, **kwargs)
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/mmvc/Congcong/Stratified-Transformer/model/stratified_transformer.py", line 438, in forward
        feats = layer(feats, xyz, batch, neighbor_idx)
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/mmvc/Congcong/Stratified-Transformer/model/stratified_transformer.py", line 357, in forward
        feats = self.kpconv(xyz, xyz, neighbor_idx, feats)
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch_points3d/modules/KPConv/kernels.py", line 83, in forward
        new_feat = KPConv_ops(
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch_points3d/modules/KPConv/convolution_ops.py", line 95, in KPConv_ops
        neighborhood_features = gather(features, neighbors_indices)
      File "/home/mmvc/anaconda3/envs/pytorch19/lib/python3.8/site-packages/torch_points3d/core/common_modules/gathering.py", line 10, in gather
        idx[idx == -1] = x.shape[0] - 1  # Shadow point
    RuntimeError: CUDA error: device-side assert triggered
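    Asynchronous CUDA errors like this one usually surface far from the failing kernel. A standard way to localize them is to rerun with launch blocking enabled, which makes the error report at its true call site:

    CUDA_LAUNCH_BLOCKING=1 python3 train.py --config config/s3dis/s3dis_stratified_transformer.yaml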

  • Segmentation fault (core dumped)

    Hi:

    I have installed the environment, but I met an error when running with one GPU set in the .yaml file:

    Segmentation fault (core dumped)

    Have you ever met this error?

    Best.

  • Has anyone come across problems with the gcc version?

    OSError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home/liy0r/anaconda3/envs/stformer/lib/python3.7/site-packages/torch_spline_conv/basis_cuda.so)

  • How to write test results as ply

    Hi, I was lucky to try your code and test it on the S3DIS dataset. I noticed the write_ply_color and write_ply_rgb functions, but it is not obvious how they should be used in the test code. If there is a way to do so, it would be appreciated; specifically, how should the points argument be passed to these functions?
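    For reference, a minimal, self-contained sketch of dumping points and per-point colors to an ASCII .ply (a generic writer under assumed inputs, not the repo's write_ply_color API, whose exact signature is unverified here):

    import numpy as np

    def write_colored_ply(path, points, colors):
        # points: (N, 3) float array; colors: (N, 3) uint8 array
        assert points.shape[0] == colors.shape[0]
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {points.shape[0]}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
            f.write("end_header\n")
            for (x, y, z), (r, g, b) in zip(points, colors):
                f.write(f"{x} {y} {z} {r} {g} {b}\n")

    # Example: color each point by its predicted class with a fixed 13-color palette.
    palette = np.random.RandomState(0).randint(0, 256, (13, 3), dtype=np.uint8)
    points = np.random.rand(100, 3).astype(np.float32)  # stand-in for test coordinates
    preds = np.random.randint(0, 13, 100)               # stand-in for predicted labels
    write_colored_ply("pred.ply", points, palette[preds])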

  • Can anyone run the code successfully

    Dear all, has anyone successfully installed the environment? Could you share the version of each third-party library you used? It looks like there are some library conflicts when I install following the README.

  • stratified_transformer.py: Stratified class forward function inputs?

    The forward function of the Stratified class takes these variables as input: features, xyz, offset, and batch. Could you please explain what these inputs refer to? I want to apply this architecture to a different computer vision task with a different dataset; I know what features and xyz are, but what is offset?

    Thank you so much in advance.
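    For context: in the point-transformer family of codebases, offset is typically the cumulative point count per scene in a packed batch, i.e. the boundaries between scenes. A sketch of the usual convention (not verified against this repo):

    import torch

    # batch: per-point scene index in a packed batch, e.g. two scenes
    # with 4 and 3 points respectively.
    batch = torch.tensor([0, 0, 0, 0, 1, 1, 1])

    # offset: cumulative number of points per scene -> [4, 7].
    # Scene i then occupies the slice [offset[i-1], offset[i]).
    offset = torch.cumsum(torch.bincount(batch), dim=0).int()
    print(offset)  # tensor([4, 7], dtype=torch.int32)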

  • A question about 'n_max'

    Hi @X-Lai, thanks for the released code.

    I am just wondering why 'n_max' needs to be at most 1024, as coded in class AttentionStep1_v2(Function). When I comment out 'assert n_max <= 1024', there is a CUDA error: invalid configuration argument (I think it is due to the limit on 'n_max').

    Sorry, I'm not familiar with CUDA C++ coding. Is there some way to lift the limit on 'n_max'? Could you give me some advice?

    It would be very nice if you could leave me an email address, which would be convenient for discussion. Here is mine: [email protected]

    Wish every success in your work.

  • Low training efficiency and warning "batch_size shortened from 8 to 1, points from 640000 to 80000."

    Hello!

    Thanks for your great work. I tried to run your code on my workstation.

    I found that the max_batch_points and voxel_max were set to 140000 and 80000 in s3dis_stratified_transformer.yaml, respectively.

    And batch_size was set to 8. In this case, the number of points in one batch can easily exceed 140,000; the maximum is 80,000 × 8 = 640,000.

    Maybe that's the reason why I often got this kind of warning,

    "WARNING [main-logger]: batch_size shortened from 8 to 1, points from 640000 to 80000."

    If I set the batch size to 1, the warning disappears, but a smaller batch size means more steps. Since I observed a low training speed per step, the overall training efficiency per epoch was poor on the whole S3DIS dataset. Do you have any suggestions to improve it?
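    For reference, a minimal sketch of the capping behavior the warning suggests, using the max_batch_points and voxel_max values from the config (a hypothetical helper, not the repo's actual collate code):

    def cap_batch(sample_sizes, max_batch_points=140000):
        # Keep adding samples until the point total would exceed max_batch_points.
        total, kept = 0, 0
        for n in sample_sizes:
            if kept > 0 and total + n > max_batch_points:
                break
            total += n
            kept += 1
        return kept, total

    # With voxel_max=80000 and batch_size=8, a full batch could hold up to
    # 8 * 80000 = 640000 points, so most batches get cut down to a single sample.
    print(cap_batch([80000] * 8))  # -> (1, 80000)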

    I trained some other models, such as KPConv, on my workstation; their training speed was higher and acceptable.

    May I know whether transformer models are relatively slow to train? I am a layman in ML.

    Cheers,

    Eric.

  • Where is the paper?

    Hi, thanks for your code repo, but the paper 'Stratified Transformer for 3D Point Cloud Segmentation' cannot be found on Google. Maybe it is not public yet; could you please attach a PDF of it?

  • Training stratified transformer on other datasets

    Hi, I have a question.

    If I train this Stratified Transformer on my own dataset, which parameters should I change to achieve relatively good performance?

    Best regards,

    Eric

  • Environment problem

    Thanks for all you have contributed to the open community! I encountered a bug similar to #36 when running the command python3 train.py --config config/s3dis/s3dis_stratified_transformer.yaml, detailed as follows:

    Traceback (most recent call last):
      File "train.py", line 21, in <module>
        from util.s3dis import S3DIS
      File "/root/Stratified-Transformer/util/s3dis.py", line 8, in <module>
        from util.voxelize import voxelize
      File "/root/Stratified-Transformer/util/voxelize.py", line 4, in <module>
        from torch_geometric.nn import voxel_grid
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_geometric/__init__.py", line 5, in <module>
        import torch_geometric.data
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
        from .data import Data
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_geometric/data/data.py", line 8, in <module>
        from torch_sparse import coalesce, SparseTensor
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_sparse/__init__.py", line 40, in <module>
        from .storage import SparseStorage  # noqa
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_sparse/storage.py", line 21, in <module>
        class SparseStorage(object):
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch/jit/_script.py", line 924, in script
        _compile_and_register_class(obj, _rcb, qualified_name)
      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch/jit/_script.py", line 64, in _compile_and_register_class
        torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)
    RuntimeError: Arguments for call are not valid. The following variants are available:

    aten::div.Tensor(Tensor self, Tensor other) -> (Tensor): Expected a value of type 'Tensor' for argument 'other' but instead found type 'int'.

    aten::div.Scalar(Tensor self, Scalar other) -> (Tensor): Keyword argument rounding_mode unknown.

    aten::div.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)): Expected a value of type 'Tensor' for argument 'other' but instead found type 'int'.

    aten::div.int(int a, int b) -> (float): Keyword argument rounding_mode unknown.

    aten::div.float(float a, float b) -> (float): Expected a value of type 'float' for argument 'b' but instead found type 'int'.

    aten::div(Scalar a, Scalar b) -> (float): Keyword argument rounding_mode unknown.

    div(float a, Tensor b) -> (Tensor): Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

    div(int a, Tensor b) -> (Tensor): Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

    The original call is:

      File "/opt/anaconda3/envs/st/lib/python3.7/site-packages/torch_sparse/storage.py", line 316
        idx = self.sparse_size(1) * self.row() + self.col()
        row = torch.div(idx, num_cols, rounding_mode='floor')
              ~~~~~~~~~ <--- HERE
        col = idx % num_cols
        assert row.dtype == torch.long and col.dtype == torch.long
    

    I use the same environment as requirements.txt: torch==1.7.1, gcc==7, torch_sparse==0.6.12, torch_points3d==1.3.0. What should I do to fix the problem? Thanks again!
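    For reference, rounding_mode was only added to torch.div in PyTorch 1.8, which is why TorchScript compilation of torch_sparse fails under torch==1.7.1. For non-negative integer tensors the call is equivalent to plain floor division (a sketch of the equivalence, not a patch for the library):

    import torch

    idx = torch.tensor([7, 12, 25])
    num_cols = 5

    # torch >= 1.8: row = torch.div(idx, num_cols, rounding_mode='floor')
    # torch 1.7-compatible equivalent for non-negative integer tensors:
    row = idx // num_cols
    col = idx % num_cols
    print(row, col)  # tensor([1, 2, 5]) tensor([2, 2, 0])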

  • Training stops in epoch 10 with Assertion Error

    Hey,

    So I was training the Stratified Transformer on my own dataset. Up to epoch 10 everything worked fine. But during epoch 10 the error below was thrown. It is related to the relative_position_index in the position encoding part, where the code checks that all indices are positive. I am wondering why any index would be negative in the first place. A little more information: I am using global coordinates of the points, and I am also doing some custom augmentations that flip and rotate the point cloud. During the first 10 epochs the validation results were better than with simple augmentations, so the augmentations are working well.

    As a solution, I thought of ensuring that the relative position is always positive by taking the absolute value of the subtraction; how would that affect training? [error screenshot]

    Let me know what you think

  • Environment Version

    I have a problem with the installation of pyg. Here is my environment: python=3.7, pytorch=1.8.0, cudatoolkit=11.1; other packages are the default versions.

    Console returns:

    Package libstdcxx-ng conflicts for:
    pytorch==1.8.0 -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=10.3.0|>=7.5.0|>=7.3.0|>=7.2.0|>=11.2.0']
    gmp -> libstdcxx-ng[version='>=7.2.0|>=7.3.0|>=7.5.0']
    libtiff -> libstdcxx-ng[version='>=4.9|>=7.3.0|>=7.5.0|>=7.2.0']
    torchvision==0.9.0 -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=10.3.0|>=7.3.0|>=7.5.0|>=11.2.0|>=8.4.0|>=7.2.0']

    ....

    • pytorch==1.8.0 -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']
    • readline -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
    • scipy -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
    • sqlite -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
    • tk -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
    • torchvision==0.9.0 -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']
    • xz -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
    • zlib -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']

    Your installed version is: 2.31

    Can you list the specific versions of your environment? Thanks very much.

  • compiling pointops2 error

    Could you please explain whether I can run this code with CUDA 10.0 and PyTorch 1.4? My GPU driver does not support newer PyTorch versions. I compiled pointops2 but got this error: error: command '/usr/bin/gcc' failed with exit code 1. My gcc version is 5.4.0. After compilation, I think this is the only module that was not created: No module named "pointops2_cuda". Is there any way to compile this lib without upgrading my PyTorch version? Thanks in advance.

  • Error running train.py s3dis: The deleter and context arguments are mutually exclusive.

    I got some errors when running train.py on the S3DIS dataset following your guideline. I kept the configuration files and installed the libraries as in requirements.txt. Because I use CUDA 11.3, I installed torch 1.10.0+cu113 and torch_geometric 1.10.0+cu113. Can you help me with this issue?

    [05/27 14:35:26 main-logger]: #Model parameters: 7069114
    [05/27 14:35:28 main-logger]: augmentation all
    [05/27 14:35:28 main-logger]: jitter_sigma: 0.005, jitter_clip: 0.02
    Totally 204 samples in train set.
    [05/27 14:35:28 main-logger]: train_data samples: '6120'
    Totally 67 samples in val set.
    [05/27 14:35:28 main-logger]: scheduler: MultiStep. scheduler_update: epoch. milestones: [60, 80], gamma: 0.1
    [05/27 14:35:28 main-logger]: lr: [0.006, 0.0006000000000000001]
    WARNING [05/27 14:35:30 main-logger]: batch_size shortened from 2 to 1, points from 160000 to 80000
    WARNING [05/27 14:35:30 main-logger]: batch_size shortened from 2 to 1, points from 160000 to 80000
    WARNING [05/27 14:35:30 main-logger]: batch_size shortened from 2 to 1, points from 143440 to 63440
    WARNING [05/27 14:35:31 main-logger]: batch_size shortened from 2 to 1, points from 157885 to 77885
    WARNING [05/27 14:35:31 main-logger]: batch_size shortened from 2 to 1, points from 159069 to 80000
    WARNING [05/27 14:35:31 main-logger]: batch_size shortened from 2 to 1, points from 160000 to 80000
    torch.Size([108012, 3]) 0.0025000000000000005
    WARNING [05/27 14:35:32 main-logger]: batch_size shortened from 2 to 1, points from 144512 to 80000
    WARNING [05/27 14:35:32 main-logger]: batch_size shortened from 2 to 1, points from 140900 to 66523
    WARNING [05/27 14:35:33 main-logger]: batch_size shortened from 2 to 1, points from 160000 to 80000
    Traceback (most recent call last):
      File "train.py", line 543, in <module>
        main()
      File "train.py", line 84, in main
        main_worker(args.train_gpu, args.ngpus_per_node, args)
      File "train.py", line 306, in main_worker
        loss_train, mIoU_train, mAcc_train, allAcc_train = train(train_loader, model, criterion, optimizer, epoch, scaler, scheduler)
      File "train.py", line 363, in train
        neighbor_idx = tp.ball_query(radius, args.max_num_neighbors, coord, coord, mode="partial_dense", batch_x=batch, batch_y=batch)[0]
      File "/home/pknu/anaconda3/envs/testPt/lib/python3.7/site-packages/torch_points_kernels/torchpoints.py", line 210, in ball_query
        return ball_query_partial_dense(radius, nsample, x, y, batch_x, batch_y, sort=sort)
      File "/home/pknu/anaconda3/envs/testPt/lib/python3.7/site-packages/torch_points_kernels/torchpoints.py", line 167, in ball_query_partial_dense
        ind, dist = tpcpu.batch_ball_query(x, y, batch_x, batch_y, radius, nsample, mode=0, sorted=sort)
    ValueError: The deleter and context arguments are mutually exclusive.
