🔥OGC in PyTorch (NeurIPS 2022)


OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds (NeurIPS 2022)

Ziyang Song, Bo Yang

Overview

We propose the first unsupervised 3D object segmentation method, which learns from the dynamic motion patterns in point cloud sequences.


Our method demonstrates promising results on various scenarios:

  • Object part instance segmentation


  • Object segmentation in indoor scenes


  • Object segmentation in outdoor scenes


Full demo (YouTube)

1. Environment

Please first install a GPU-enabled PyTorch version that fits your machine. We have tested with PyTorch 1.9.0.

Install the PointNet2 C++ library:

cd pointnet2
python setup.py install
cd ..

Install other dependencies:

pip install -r requirements.txt

(Optional) Install Open3D for the visualization of point cloud segmentation:

pip install open3d

2. Data preparation

(1) SAPIEN

Please download from links provided by MultibodySync:

Then put them into your ${SAPIEN} path.

(2) OGC-DR (Dynamic Room) & OGC-DRSV (Single-View Dynamic Room)

Please download the complete datasets from links below:

Alternatively, you can generate the datasets yourself.

OGC-DR: Please first download ShapeNet Core v1. Select the archives according to the object categories specified in data_prepare/ogcdr/meta.yaml and unzip them into your ${OGC_DR}/ShapeNet_mesh path. Then run the following script to generate the dataset:

python data_prepare/ogcdr/build_ogcdr.py ${OGC_DR}

OGC-DRSV: Run the following script to collect single-view depth scans of the OGC-DR mesh models and generate the incomplete point cloud dataset OGC-DRSV:

python data_prepare/ogcdrsv/build_ogcdrsv.py --src_root ${OGC_DR} --dest_root ${OGC_DRSV}

Collect the ground-truth segmentation for OGC-DRSV and downsample the point clouds:

python data_prepare/ogcdrsv/collect_segm.py --src_root ${OGC_DR} --dest_root ${OGC_DRSV}

(3) KITTI-SF (Scene Flow)

Please first download:

Merge their training folders into your ${KITTI_SF} path. Then run the following script to unproject the disparity, optical flow, and 2D segmentation maps into point clouds, scene flow, and 3D segmentation:

python data_prepare/kittisf/process_kittisf.py ${KITTI_SF}
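The unprojection of disparity into 3D points follows standard pinhole stereo geometry (depth = focal length × baseline / disparity). Below is a minimal sketch of the idea with hypothetical function and parameter names, not the exact code in process_kittisf.py:

```python
import numpy as np

def disparity_to_points(disparity, f, cx, cy, baseline):
    """Unproject a disparity map (H, W) into 3D points in the camera frame.

    Standard pinhole stereo: Z = f * baseline / disparity,
    X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                  # zero disparity marks invalid pixels
    z = f * baseline / disparity[valid]
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)     # (N, 3) point cloud
```

Scene flow then follows by unprojecting the second-frame disparity at the pixels shifted by the optical flow and subtracting the two point sets.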

Finally, downsample all point clouds to 8192 points:

python data_prepare/kittisf/downsample_kittisf.py ${KITTI_SF} --save_root ${KITTI_SF}_downsampled
# After extracting the flow estimations in Section 4 below, come back here to downsample them as well
python data_prepare/kittisf/downsample_kittisf.py ${KITTI_SF} --save_root ${KITTI_SF}_downsampled --predflow_path flowstep3d

${KITTI_SF}_downsampled will be the path for the downsampled dataset.
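The downsampling must keep the per-point segmentation labels and flow vectors aligned with the sampled points. A simplified sketch of random subsampling (the script's actual sampling strategy may differ):

```python
import numpy as np

def downsample(pc, segm, flow, n_points=8192, seed=0):
    """Randomly subsample a point cloud to n_points, keeping the per-point
    segmentation labels and scene flow vectors aligned with the points."""
    rng = np.random.default_rng(seed)
    replace = pc.shape[0] < n_points   # pad by resampling if the cloud is small
    idx = rng.choice(pc.shape[0], n_points, replace=replace)
    return pc[idx], segm[idx], flow[idx]
```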

(4) KITTI-Det (Detection)

Please first download the following items from KITTI 3D Object Detection Evaluation 2017:

Merge their training folders into your ${KITTI_DET} path. Then run the following script to extract 8192-point front-view point clouds and obtain segmentation from the bounding-box annotations:

python data_prepare/kittidet/process_kittidet.py ${KITTI_DET}
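Extracting a "front-view" cloud from a 360° LiDAR sweep amounts to keeping points ahead of the sensor within roughly the camera's field of view. A minimal sketch with an illustrative FOV value (not the exact threshold used by the script):

```python
import numpy as np

def front_view(points, fov_deg=90.0):
    """Keep points in front of the sensor (x > 0) whose azimuth lies
    within +/- fov_deg / 2 of the forward axis."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    mask = (points[:, 0] > 0) & (np.abs(azimuth) < fov_deg / 2)
    return points[mask]
```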

(5) SemanticKITTI

Please first download the following items from SemanticKITTI:

Merge the velodyne, labels and calib.txt of each sequence. The dataset should be organized as follows:

SemanticKITTI
└── sequences
    ├── 00
    │   ├── velodyne
    │   ├── labels
    │   └── calib.txt
    ├── 01
    ...
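As a convenience, the layout above can be sanity-checked before processing; a small helper sketch (not part of the repo):

```python
import os.path as osp

def check_semantickitti(root, sequences=("00", "01")):
    """Return the list of missing velodyne/, labels/ or calib.txt entries
    under ${root}/sequences/<seq>; an empty list means the layout is valid."""
    missing = []
    for seq in sequences:
        seq_dir = osp.join(root, "sequences", seq)
        for item in ("velodyne", "labels", "calib.txt"):
            if not osp.exists(osp.join(seq_dir, item)):
                missing.append(osp.join(seq_dir, item))
    return missing
```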

Then run the following script to extract 8192-point front-view point clouds and obtain segmentation from the panoptic annotations:

python data_prepare/semantickitti/process_semantickitti.py ${SEMANTIC_KITTI}

3. Pre-trained models

You can download all our pre-trained models from Dropbox (including the self-supervised scene flow networks and the unsupervised/supervised segmentation networks) and extract them to ./ckpt.

4. Scene flow estimation

Train

Train the self-supervised scene flow networks:

# SAPIEN 
python train_flow.py config/flow/sapien/sapien_unsup.yaml
# OGC-DR 
python train_flow.py config/flow/ogcdr/ogcdr_unsup.yaml
# OGC-DRSV 
python train_flow.py config/flow/ogcdrsv/ogcdrsv_unsup.yaml

For the KITTI-SF dataset, we directly employ the pre-trained model released by FlowStep3D.

Test

Evaluate and save the scene flow estimations.

# SAPIEN 
python test_flow.py config/flow/sapien/sapien_unsup.yaml --split ${SPLIT} --save
# OGC-DR 
python test_flow.py config/flow/ogcdr/ogcdr_unsup.yaml --split ${SPLIT} --test_batch_size 12 --test_model_iters 5 --save
# OGC-DRSV 
python test_flow.py config/flow/ogcdrsv/ogcdrsv_unsup.yaml --split ${SPLIT} --test_batch_size 12 --test_model_iters 5 --save
# KITTI-SF 
python test_flow_kittisf.py config/flow/kittisf/kittisf_unsup.yaml --split ${SPLIT} --test_model_iters 5 --save

${SPLIT} can be train/val/test for SAPIEN and OGC-DR/OGC-DRSV, and train/val for KITTI-SF.
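Scene flow quality in such evaluations is commonly summarized by the end-point error (EPE), the mean Euclidean distance between estimated and ground-truth flow vectors, and by a strict accuracy, the fraction of points whose absolute or relative error falls below a threshold. A sketch of these standard metrics (not the repo's exact evaluation code):

```python
import numpy as np

def epe(flow_pred, flow_gt):
    """Mean end-point error between predicted and ground-truth flow, both (N, 3)."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=1).mean())

def acc_strict(flow_pred, flow_gt, thresh=0.05):
    """Fraction of points with end-point error < thresh (meters)
    or relative error < thresh."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=1)
    rel = err / (np.linalg.norm(flow_gt, axis=1) + 1e-8)
    return float(((err < thresh) | (rel < thresh)).mean())
```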

5. Unsupervised segmentation

Train

Alternate between segmentation network training and scene flow improvement for R rounds. In each ${ROUND} (starting from 1):

# SAPIEN: first R-1 rounds
python train_seg.py config/seg/sapien/sapien_unsup_woinv.yaml --round ${ROUND}
python oa_icp.py config/seg/sapien/sapien_unsup_woinv.yaml --split ${SPLIT} --round ${ROUND} --save
# SAPIEN: the last round
python train_seg.py config/seg/sapien/sapien_unsup.yaml --round ${ROUND}

# OGC-DR: first R-1 rounds
python train_seg.py config/seg/ogcdr/ogcdr_unsup_woinv.yaml --round ${ROUND}
python oa_icp.py config/seg/ogcdr/ogcdr_unsup_woinv.yaml --split ${SPLIT} --round ${ROUND} --test_batch_size 24 --save
# OGC-DR: the last round
python train_seg.py config/seg/ogcdr/ogcdr_unsup.yaml --round ${ROUND}

# OGC-DRSV: first R-1 rounds
python train_seg.py config/seg/ogcdrsv/ogcdrsv_unsup_woinv.yaml --round ${ROUND}
python oa_icp.py config/seg/ogcdrsv/ogcdrsv_unsup_woinv.yaml --split ${SPLIT} --round ${ROUND} --test_batch_size 24 --save
# OGC-DRSV: the last round
python train_seg.py config/seg/ogcdrsv/ogcdrsv_unsup.yaml --round ${ROUND}

# KITTI-SF: first R-1 rounds
python train_seg.py config/seg/kittisf/kittisf_unsup_woinv.yaml --round ${ROUND}
python oa_icp.py config/seg/kittisf/kittisf_unsup_woinv.yaml --split ${SPLIT} --round ${ROUND} --test_batch_size 4 --save
# KITTI-SF: the last round
python train_seg.py config/seg/kittisf/kittisf_unsup.yaml --round ${ROUND}

When performing scene flow improvement, ${SPLIT} needs to traverse train/val/test for SAPIEN and OGC-DR/OGC-DRSV, and train/val for KITTI-SF.
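The alternation above can be scripted. The sketch below only builds the command strings for KITTI-SF rather than executing them (e.g. via subprocess); adapt the configs, splits, and batch sizes for the other datasets:

```python
def build_round_commands(n_rounds, splits=("train", "val")):
    """Build the KITTI-SF command sequence: rounds 1..n_rounds-1 train without
    the invariance term and refine the flow on every split; the last round
    trains the full objective."""
    woinv = "config/seg/kittisf/kittisf_unsup_woinv.yaml"
    full = "config/seg/kittisf/kittisf_unsup.yaml"
    cmds = []
    for r in range(1, n_rounds):
        cmds.append(f"python train_seg.py {woinv} --round {r}")
        for split in splits:
            cmds.append(f"python oa_icp.py {woinv} --split {split} "
                        f"--round {r} --test_batch_size 4 --save")
    cmds.append(f"python train_seg.py {full} --round {n_rounds}")
    return cmds
```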

Test

# SAPIEN 
python test_seg.py config/seg/sapien/sapien_unsup.yaml --split test --round ${ROUND}
# OGC-DR 
python test_seg.py config/seg/ogcdr/ogcdr_unsup.yaml --split test --round ${ROUND} --test_batch_size 16
# OGC-DRSV 
python test_seg.py config/seg/ogcdrsv/ogcdrsv_unsup.yaml --split test --round ${ROUND} --test_batch_size 16
# KITTI-SF 
python test_seg.py config/seg/kittisf/kittisf_unsup.yaml --split val --round ${ROUND} --test_batch_size 8
# KITTI-Det 
python test_seg.py config/seg/kittidet/kittisf_unsup.yaml --split val --round ${ROUND} --test_batch_size 8
# SemanticKITTI 
python test_seg.py config/seg/semantickitti/kittisf_unsup.yaml --round ${ROUND} --test_batch_size 8

${ROUND} can be 1/2/3/...; we take 2 rounds as the default in our experiments. Specify --save to save the estimations, and --visualize for qualitative evaluation mode.

Test the scene flow improvement

You can follow the evaluation settings of FlowStep3D to test the improved flow and see how our method pushes the boundaries of unsupervised scene flow estimation:

# Refine the scene flow estimations
python oa_icp.py config/seg/kittisf/kittisf_unsup.yaml --split train --round 2 --test_batch_size 4 --save --saveflow_path flowstep3d_for-benchmark
python oa_icp.py config/seg/kittisf/kittisf_unsup.yaml --split val --round 2 --test_batch_size 4 --save --saveflow_path flowstep3d_for-benchmark
# Evaluate
python test_flow_kittisf_benchmark.py config/flow/kittisf/kittisf_unsup.yaml

6. Supervised segmentation

You can train the segmentation network with full annotations.

Train

# SAPIEN 
python train_seg_sup.py config/seg/sapien/sapien_sup.yaml
# OGC-DR
python train_seg_sup.py config/seg/ogcdr/ogcdr_sup.yaml
# OGC-DRSV
python train_seg_sup.py config/seg/ogcdrsv/ogcdrsv_sup.yaml
# KITTI-SF 
python train_seg_sup.py config/seg/kittisf/kittisf_sup.yaml
# KITTI-Det 
python train_seg_sup.py config/seg/kittidet/kittidet_sup.yaml

Test

# SAPIEN 
python test_seg.py config/seg/sapien/sapien_sup.yaml --split test
# OGC-DR
python test_seg.py config/seg/ogcdr/ogcdr_sup.yaml --split test --test_batch_size 16
# OGC-DRSV
python test_seg.py config/seg/ogcdrsv/ogcdrsv_sup.yaml --split test --test_batch_size 16
# KITTI-SF 
python test_seg.py config/seg/kittisf/kittisf_sup.yaml --split val --test_batch_size 8
# KITTI-Det (model trained on KITTI-SF)
python test_seg.py config/seg/kittidet/kittisf_sup.yaml --split val --test_batch_size 8
# KITTI-Det (model trained on KITTI-Det)
python test_seg.py config/seg/kittidet/kittidet_sup.yaml --split val --test_batch_size 8
# SemanticKITTI 
python test_seg.py config/seg/semantickitti/kittisf_sup.yaml --test_batch_size 8

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{song2022,
  title={{OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds}},
  author={Song, Ziyang and Yang, Bo},
  booktitle={NeurIPS},
  year={2022}
}

Acknowledgements

Some code is borrowed from:
