[ECCV2022] The PyTorch implementation of the paper "Equivariance and Invariance Inductive Bias for Learning from Insufficient Data"

Overview

[ECCV2022] EqInv

This repository contains the official PyTorch implementation of the paper "Equivariance and Invariance Inductive Bias for Learning from Insufficient Data".

Equivariance and Invariance Inductive Bias for Learning from Insufficient Data
Tan Wang, Qianru Sun, Sugiri Pranata, Karlekar Jayashree, Hanwang Zhang
European Conference on Computer Vision (ECCV), 2022
[Paper: Coming Soon] [Poster: Coming Soon] [Slides: Coming Soon]


EqInv Algorithm


From this project, you can:

  • Try our algorithm on data-efficient learning tasks, for example the VIPriors Challenge.
  • Use our dataset, or generate your own data with our script, for evaluation.
  • Improve our equivariance and/or invariance idea and apply it to your own project.

BibTex

If you find our code helpful, please cite our paper:

@inproceedings{wang2022equivariance,
  title={Equivariance and invariance inductive bias for learning from insufficient data},
  author={Wang, Tan and Sun, Qianru and Pranata, Sugiri and Jayashree, Karlekar and Zhang, Hanwang},
  booktitle={European Conference on Computer Vision},
  year={2022}
}

Prerequisites

  • Python 3.7
  • PyTorch 1.9.0
  • tqdm
  • randaugment
  • opencv-python
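
These packages can typically be installed with pip once a CUDA-enabled PyTorch 1.9.0 environment is available, for example:

pip install tqdm randaugment opencv-python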

Data Preparation

Please download the dataset from this link and put it into the data folder.

Notes:

  • We also provide the dataset generation script here, so you can generate any data-efficient learning dataset you want.
  • Besides the train and val sets, we also provide the testgt set for test-accuracy evaluation. This is possible because the VIPriors Challenge uses part of the ImageNet val set for testing (an example folder layout is sketched below).
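
The exact sub-folder names depend on the released archive, but a layout consistent with the training commands below (an assumption for illustration, not a guarantee) looks like:

data/
  imagenet_10/
    train/    (num_shot images per class)
    val/
    testgt/   (labelled test split drawn from the ImageNet val set)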

Training

0. Main parameters (may need to be specified by the user)

  • pretrain_path: path to the SSL pretrained model.
  • stage1_model: the model type used in the SSL pretraining stage.
  • num_shot: number of samples per class in the dataset.
  • class_num: number of classes in the dataset.
  • activate_type (passed as --activat_type in the commands below): the activation applied to the mask.
  • inv_start: when to switch on the invariance regularization.
  • inv_weight: the weight of the invariance regularization.
  • opt_mask: whether to optimize the mask (see the sketch after this list for how these options fit together).
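
To make these options concrete, here is a minimal, illustrative sketch of how a learnable sigmoid-activated mask and a delayed invariance penalty could be combined in a training step. Every name and shape below (masked_prediction, training_loss, the per-environment cross-entropy risks) is a hypothetical stand-in, not the actual code in vipriors_eqinv.py, and the real EqInv objective may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative placeholder values; the real ones come from the command-line arguments.
feat_dim, class_num, inv_start, inv_weight = 2048, 1000, 10, 100.0

mask_logits = nn.Parameter(torch.zeros(feat_dim))   # optimized only when --opt_mask is set
classifier = nn.Linear(feat_dim, class_num)

def masked_prediction(features, activat_type="sigmoid"):
    # --activat_type selects the activation applied to the mask before it gates the features.
    mask = torch.sigmoid(mask_logits) if activat_type == "sigmoid" else mask_logits
    return classifier(features * mask)

def training_loss(features_per_env, labels_per_env, epoch):
    # One risk per environment (e.g. per augmentation view of the batch).
    risks = torch.stack([
        F.cross_entropy(masked_prediction(f), y)
        for f, y in zip(features_per_env, labels_per_env)
    ])
    loss = risks.mean()
    if epoch >= inv_start:                       # --inv_start delays the regularizer
        loss = loss + inv_weight * risks.var()   # REx-style variance penalty (--inv rex)
    return loss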

1. Run the baseline model #1: Training from Scratch

CUDA_VISIBLE_DEVICES=0,1,2,3 python baseline.py -b 256 --name vipriors10_rn50 -j 8 --lr 0.1 data/imagenet_10

You can also try the built-in augmentation algorithms, such as Mixup:

CUDA_VISIBLE_DEVICES=0,1,2,3 python baseline.py -b 256 --name vipriors10_rn50_mixup -j 8 --lr 0.1 data/imagenet_10 --mixup
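
For reference, mixup trains on convex combinations of pairs of inputs and their labels. The snippet below is the standard formulation as a rough sketch, not necessarily the exact settings used inside baseline.py (the mixing parameter alpha here is a placeholder):

import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, images, labels, alpha=0.2):
    # Blend each sample with a randomly permuted partner from the same batch.
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(images.size(0), device=images.device)
    mixed = lam * images + (1.0 - lam) * images[index]
    logits = model(mixed)
    # The loss is the matching convex combination of the two targets' losses.
    return lam * F.cross_entropy(logits, labels) + (1.0 - lam) * F.cross_entropy(logits, labels[index])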

2. Run the baseline model #2: Training from SSL

CUDA_VISIBLE_DEVICES=4,5,6,7 python baseline_eq_ipirm.py -b 256 --name vipriors10_rn50_lr0.1_ipirm -j 8 --lr 0.1 data/imagenet_10 --pretrain_path phase1_ssl_methods/run_imagenet10/ipirm_imagenet10/model_ipirm.pth

For the SSL pretraining process, please follow the section below.


3. Run our EqInv model

Step-1: SSL Pretraining (Equivariance Learning)

Please follow the original codebase. We list the code we used below:

Please put the pretrained models in phase1_ssl_methods. You can also choose to directly use our SSL pretrained models (IP-IRM) here.
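
If you want to inspect or reuse an SSL checkpoint outside the provided scripts, loading it into a torchvision ResNet-50 typically looks like the sketch below. The checkpoint structure is an assumption here (the 'state_dict' wrapper and the 'module.'/'encoder.' prefixes vary between SSL codebases), so check the file first:

import torch
from torchvision.models import resnet50

backbone = resnet50()  # the classification/projection head is re-initialised for fine-tuning anyway

ckpt = torch.load(
    "phase1_ssl_methods/run_imagenet10/ipirm_imagenet10/model_ipirm.pth",
    map_location="cpu",
)
state = ckpt.get("state_dict", ckpt)  # some checkpoints wrap the weights in 'state_dict'
# Strip prefixes commonly added by DataParallel or the SSL wrapper (assumed, not guaranteed).
state = {k.replace("module.", "").replace("encoder.", ""): v for k, v in state.items()}
missing, unexpected = backbone.load_state_dict(state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)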


Step-2/3: Downstream Fine-tuning (Invariance Learning)

Running Commands

CUDA_VISIBLE_DEVICES=4,5,6,7 python vipriors_eqinv.py -b 128  --name vipriors10_ipirm_mask_sigmoid_rex100._start10 -j 24 data/imagenet_10 --pretrain_path phase1_ssl_methods/run_imagenet10/ipirm_imagenet10/model_ipirm.pth --inv rex --inv_weight 100. --opt_mask --activat_type sigmoid --inv_start 10 --mlp --stage1_model ipirm --num_shot 10
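
Here the flags map onto the parameters listed in step 0: --inv rex selects the REx-style invariance regularizer, --inv_weight 100. sets its weight, --inv_start 10 delays it, --opt_mask together with --activat_type sigmoid enables the learnable sigmoid-activated mask, and --stage1_model ipirm / --num_shot 10 should match the SSL pretraining method and the dataset.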

You can also adopt RandAugment (via --random_aug) to achieve better results:

CUDA_VISIBLE_DEVICES=4,5,6,7 python vipriors_eqinv.py -b 128  --name vipriors10_ipirm_mask_sigmoid_rex10._start10_randaug -j 24 data/imagenet_10 --pretrain_path phase1_ssl_methods/run_imagenet10/ipirm_imagenet10/model_ipirm.pth --inv rex --inv_weight 10. --opt_mask --activat_type sigmoid --inv_start 10 --mlp --stage1_model ipirm --num_shot 10 --random_aug

For the other datasets, you can try:

CUDA_VISIBLE_DEVICES=0,1,2,3 python vipriors_eqinv.py -b 128  --name vipriors20_ipirm_mask_sigmoid_rex10._start10_randaug -j 24 data/imagenet_20 --pretrain_path phase1_ssl_methods/run_imagenet20/ipirm_imagenet20/model_ipirm.pth --inv rex --inv_weight 10. --opt_mask --activat_type sigmoid --inv_start 10 --mlp --stage1_model ipirm --num_shot 20 --random_aug
CUDA_VISIBLE_DEVICES=0,1,2,3 python vipriors_eqinv.py -b 128  --name vipriors50_ipirm_mask_sigmoid_rex10._start10_randaug -j 24 data/imagenet_50 --pretrain_path phase1_ssl_methods/run_imagenet50/ipirm_imagenet50/model_ipirm.pth --inv rex --inv_weight 10. --opt_mask --activat_type sigmoid --inv_start 10 --mlp --stage1_model ipirm --num_shot 50 --random_aug
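
Across these runs only the GPU ids, the dataset path, the checkpoint passed to --pretrain_path, --num_shot, and the run --name change; the remaining flags are the same as in the RandAugment command for imagenet_10 above.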

If you have any questions, please feel free to email me ([email protected]).
