VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training

Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training [Arxiv]

VideoMAE Framework

VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
Zhan Tong, Yibing Song, Jue Wang, Limin Wang
Nanjing University, Tencent AI Lab

📰 News

[2022.4.24] Code and pre-trained models are available now! Please give the project a star ⭐️ if you find it helpful. 😆
[2022.4.15] The LICENSE of this project has been upgraded to CC-BY-NC 4.0.
[2022.3.24] Code and pre-trained models will be released here. Watch this repository for the latest updates.

Highlights

🔥 Masked Video Modeling for Video Pre-Training

VideoMAE performs the task of masked video modeling for video pre-training. We propose an extremely high masking ratio (90%-95%) together with a tube masking strategy to create a challenging task for self-supervised video pre-training.
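
For intuition, here is a minimal sketch of what tube masking means in code, assuming a ViT-style grid of space-time patch tokens (the function name and shapes are illustrative, not the repository's actual API):

    import torch

    def tube_mask(temporal_slots, height_tokens, width_tokens, mask_ratio=0.9):
        """Sample one spatial mask and repeat it along time ("tube" masking).

        Returns a boolean mask of shape (temporal_slots, height_tokens * width_tokens)
        where True marks masked patch positions. Illustrative sketch only.
        """
        num_spatial = height_tokens * width_tokens
        num_masked = int(mask_ratio * num_spatial)
        # Randomly pick which spatial positions to hide.
        masked_idx = torch.rand(num_spatial).argsort()[:num_masked]
        spatial_mask = torch.zeros(num_spatial, dtype=torch.bool)
        spatial_mask[masked_idx] = True
        # The same spatial mask is shared by every temporal slot, so a masked
        # patch stays hidden for the whole clip, avoiding temporal leakage.
        return spatial_mask.unsqueeze(0).expand(temporal_slots, num_spatial)

    # Example: 16 frames with a tubelet of 2 -> 8 temporal slots of 14x14 patches.
    mask = tube_mask(temporal_slots=8, height_tokens=14, width_tokens=14)
    print(mask.shape, mask.float().mean().item())  # torch.Size([8, 196]), ~0.9 masked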

⚡️ A Simple, Efficient and Strong Baseline in SSVP

VideoMAE uses a simple masked autoencoder with a plain ViT backbone to perform video self-supervised learning. Because of the extremely high masking ratio, the pre-training time of VideoMAE is much shorter than that of contrastive learning methods (3.2x speedup). VideoMAE can serve as a simple but strong baseline for future research in self-supervised video pre-training.
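
As a rough back-of-the-envelope check of why the high masking ratio keeps pre-training cheap (numbers assume 224x224 inputs, 16x16 spatial patches and a temporal tubelet of 2; the exact configuration may differ):

    # Rough token-count arithmetic for a 16-frame 224x224 clip (illustrative setup).
    frames, tubelet, patch = 16, 2, 16
    spatial_tokens = (224 // patch) ** 2                 # 14 x 14 = 196 patches per slot
    total_tokens = (frames // tubelet) * spatial_tokens  # 8 * 196 = 1568 tokens per clip
    visible_tokens = int(total_tokens * (1 - 0.90))      # only ~10% of tokens reach the encoder
    print(total_tokens, visible_tokens)                  # 1568 156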

😮 High performance, but NO extra data required

VideoMAE works well on video datasets of different scales, achieving 84.7% on Kinetics-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51. To the best of our knowledge, VideoMAE is the first method to achieve state-of-the-art performance on these four popular benchmarks with a vanilla ViT backbone, without needing any extra data or pre-trained models.

🚀 Main Results

Something-Something V2

| Method   | Extra Data | Backbone | Frames x Clips x Crops | Top-1 | Top-5 |
| -------- | ---------- | -------- | ---------------------- | ----- | ----- |
| VideoMAE | no         | ViT-B    | 16x2x3                 | 70.6  | 92.6  |
| VideoMAE | no         | ViT-L    | 16x2x3                 | 74.2  | 94.7  |
| VideoMAE | no         | ViT-L    | 32x1x3                 | 75.3  | 95.2  |

Kinetics-400

| Method   | Extra Data   | Backbone | Frames x Clips x Crops | Top-1 | Top-5 |
| -------- | ------------ | -------- | ---------------------- | ----- | ----- |
| VideoMAE | no           | ViT-B    | 16x5x3                 | 80.9  | 94.7  |
| VideoMAE | no           | ViT-L    | 16x5x3                 | 84.7  | 96.5  |
| VideoMAE | Kinetics-700 | ViT-L    | 16x5x3                 | 85.8  | 96.8  |

UCF101 & HMDB51

| Method   | Extra Data   | Backbone | UCF101 | HMDB51 |
| -------- | ------------ | -------- | ------ | ------ |
| VideoMAE | no           | ViT-B    | 90.8   | 61.1   |
| VideoMAE | Kinetics-400 | ViT-B    | 96.1   | 73.3   |

🔨 Installation

Please follow the instructions in INSTALL.md.

➡️ Data Preparation

Please follow the instructions in DATASET.md for data preparation.

🔄 Pre-training

The pre-training instruction is in PRETRAIN.md.

⤴️ Fine-tuning with pre-trained models

The fine-tuning instruction is in FINETUNE.md.

📍 Model Zoo

We provide pre-trained and fine-tuned models in MODEL_ZOO.md.

👀 Visualization

We provide the script for visualization in vis.sh. A Colab notebook for better visualization is coming soon.

☎️ Contact

Zhan Tong: [email protected]

👍 Acknowledgements

Thanks to Ziteng Gao, Lei Chen and Chongjian Ge for their kind support.
This project is built upon MAE-pytorch and BEiT. Thanks to the contributors of these great codebases.

🔒 License

The majority of this project is released under the CC-BY-NC 4.0 license as found in the LICENSE file. Portions of the project are available under separate license terms: SlowFast and pytorch-image-models are licensed under the Apache 2.0 license. BEiT is licensed under the MIT license.

✏️ Citation

If you find this project helpful, please feel free to give a star ⭐️ and cite our paper:

@article{videomae,
  title={VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training},
  author={Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin},
  journal={arXiv preprint arXiv:2203.12602},
  year={2022}
}
Owner
Multimedia Computing Group, Nanjing University
Comments
  • release csv label files for ssv2

    Hi, congratulations on your great work! Could you release the label files (train.csv, val.csv, test.csv) for SSv2? I tried to generate these files following the guidance in DATASET.md and used them for evaluation on SSv2 by running run_class_finetuning.py with --eval, but the accuracy is abnormally low, so I suspect the csv files I generated are wrong. Could you release the csv files for K400 and SSv2 for everyone to download? Thanks!

  • The learning rate for the SSv2 dataset

    Hi, I have tried to reproduce the VideoMAE performance on the SSv2 dataset. I ran the experiments on four A100 machines (each with eight 80G GPUs) and set --nnodes=4 and batch_size 64, so that the total batch size is the same. However, the performance is not consistent with the reported one. I checked the log you provided and noticed that the learning rate is different: it does not seem to be 1.2e-3 (1.5e-4 * 2048 / 256) after the warm-up stage. Thanks a lot, and I am looking forward to your reply.
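
    For reference, the linear scaling rule quoted above works out as follows (a quick sanity check based on the numbers in this comment, not the repository's code):

        base_lr = 1.5e-4                   # base lr for a total batch size of 256
        total_batch_size = 64 * 8 * 4      # batch_size 64 x 8 GPUs x 4 nodes = 2048
        effective_lr = base_lr * total_batch_size / 256
        print(effective_lr)                # ~0.0012, i.e. 1.2e-3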

  • Encountered missing keys error while fine-tuning

    I encountered a missing-keys error while fine-tuning: I resumed from a checkpoint produced by the pre-training script and used it for fine-tuning. How can I solve this problem? [image]

  • About inference speed?

    Thanks for the great work! But I have one question that was not discussed in the paper, which is the inference speed.

    I understand that during training the speed is fast, since you mask out 90% of the patches. However, at test time I assume you retain all the patches. Since the attention in ViT is quadratic in cost, does that mean you need 10*10 = 100 times the FLOPs compared to the pre-training phase?

    If that is the case, how much time does a single forward pass take?
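
    For what it's worth, here is the rough arithmetic behind that 100x figure for the attention cost alone (illustrative numbers, assuming roughly 1568 tokens per clip and 90% masking; the MLP blocks scale linearly in token count, so the end-to-end gap is smaller):

        total_tokens = 1568                        # e.g. 8 temporal slots x 14 x 14 patches
        visible_tokens = int(total_tokens * 0.1)   # ~10% of tokens visible during pre-training
        attention_ratio = total_tokens ** 2 / visible_tokens ** 2
        print(round(attention_ratio, 1))           # ~101x more attention FLOPs with all tokens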

  • Request for the best pre-trained checkpoint for downstream tasks

    Thanks for your excellent work. With only self-supervised learning, VideoMAE performs surprisingly well. Beyond that, I think (self-supervised + fully supervised) models are also very meaningful, especially for downstream tasks/datasets.

    @yztongzhan Could you please provide the best pre-trained model/checkpoint you can offer for learning video representations for arbitrary downstream video datasets?

    For example, a model pre-trained on Kinetics-700 with VideoMAE in a self-supervised manner and then trained on Kinetics-700 with supervision.

    Such a model should be very useful. One could easily fine-tune it on any downstream dataset for many possible purposes, just like a ResNet-50 pre-trained on ImageNet.

  • loss_scale_value has no key "scale"

    Hi, thanks for your impressive work. I ran run_mae_pretraining.py and got an error in engine_for_pretraining.py, line 79, in train_one_epoch: loss_scale_value = loss_scaler.state_dict()["scale"], as shown below. P.S. I used the pre-trained SSv2 checkpoint. 342b799fcc675fc5b1e8606ce8073f3dbe3f4e24

  • How to preprocess?

    Hello. First, thank you for this repository.

    I'm trying to preprocess the data.

    I have already looked at INSTALL.md and installed everything. Now I'm looking at DATASET.md and trying to follow it. However, I already downloaded Kinetics-400 before, so I just need to pre-process the data.

    i) Download the dataset from the official website.

    ii) Preprocess the dataset by resizing the short edge of the videos to 320px. You can refer to the MMAction2 data benchmark for TSN and SlowOnly.

    iii) Create the annotations required by the dataloader ("<path_to_video> <video_class>" in the annotations). The annotations usually include train.csv, val.csv, and test.csv (here val.csv is equivalent to test.csv). The format of the *.csv files is:

    I'm on step (ii) of the process you mentioned, and I'm not sure how to do it. How do I pre-process?

    Thank you for reading!
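
    As an aside, a minimal sketch of step (iii) above, writing the space-separated "<path_to_video> <video_class>" annotation file quoted from DATASET.md; the directory layout and label mapping below are hypothetical and should be adapted to your own data:

        import csv
        import os

        video_root = "/data/kinetics400/train_320p"      # hypothetical path to resized videos
        label_map = {"abseiling": 0, "air_drumming": 1}   # hypothetical class -> id mapping

        with open("train.csv", "w", newline="") as f:
            writer = csv.writer(f, delimiter=" ")
            for class_name, class_id in label_map.items():
                class_dir = os.path.join(video_root, class_name)
                for fname in sorted(os.listdir(class_dir)):
                    if fname.endswith(".mp4"):
                        writer.writerow([os.path.join(class_dir, fname), class_id])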

  • Adding accumlate_iter

    I added an accumlate_iter option to the pre-training process. This can help keep the same effective batch size as the original VideoMAE setup while using fewer GPUs; I should say that 64 GPUs are not usually affordable.
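
    For readers unfamiliar with the trick, gradient accumulation in its generic form looks roughly like this (a self-contained toy sketch, not the actual patch; the model, loader and variable names are placeholders):

        import torch
        import torch.nn as nn

        model = nn.Linear(8, 1)                                      # stand-in for the video model
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
        data_loader = [(torch.randn(2, 8), None) for _ in range(8)]  # fake mini-batches
        accum_iter = 4                  # effective batch size = accum_iter x per-GPU batch size

        optimizer.zero_grad()
        for step, (x, _) in enumerate(data_loader):
            loss = model(x).pow(2).mean()            # placeholder loss
            (loss / accum_iter).backward()           # scale so gradients average over the window
            if (step + 1) % accum_iter == 0:
                optimizer.step()
                optimizer.zero_grad()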

  • The log files of the pre-train and fine-tune stages

    Thanks for your contribution to this community! Could you provide the log files of the pre-training and fine-tuning stages for better reproduction? I'd appreciate it if you could provide them.

  • ImportError: cannot import name 'VideoMAEFeatureExtractor' from 'transformers'

    Hi!

    I tried to replicate the steps shown on Hugging Face in Google Colab and got this error: ImportError: cannot import name 'VideoMAEFeatureExtractor' from 'transformers'.

    If this is a version issue, can you specify the version of transformers to be used?

    Thanks.

  • novograd optimizer removed from timm

    ModuleNotFoundError: No module named 'timm.optim.novograd'

    It seems that timm now just gives you NvNovoGrad if you ask for novograd:

    https://github.com/rwightman/pytorch-image-models/blob/master/timm/optim/optim_factory.py

    elif opt_lower == 'novograd' or opt_lower == 'nvnovograd':
        optimizer = NvNovoGrad(parameters, **opt_args)
    
  • Training recipe for AVA

    Thanks for the excellent work!

    Could you provide the training configuration for the AVA dataset? I tried to reproduce the results but observed overfitting.

  • Pre-training time for UCF101

    Hello author,

    I saw you mentioned that 8 GPUs were used for pre-training on the 9.5k UCF101 video clips. May I ask what the total pre-training time is for pre-training on UCF101 for 3200 epochs with batch size 192 (the setting in your paper)?

    Appreciate it!

  • Linear probe experiment

    Is there a script or details available for linear probing? The original MAE paper suggests there is a substantial difference between the fine-tuning and linear-probing setups for MAE models, and I want to be able to reproduce the results.

    Thanks!

  • About preparing SthV2

    Hi, thank you for your work!

    I read your DATASET.md.

    Are there two key points in processing the SthV2 dataset: first, changing the file suffix to .mp4, and second, resizing the short side to 320p? (And are only videos with an original height of 240p selected and then rescaled?)
