This is the official implementation of the CVPR 2022 paper "Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots".

Overview

Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots

Blind2Unblind trains directly on noisy images: a global-aware mask mapper samples blind spots across the whole image, and a re-visible loss lets the training objective see the masked pixels again, recovering the information that purely blind-spot-based methods discard.

Citing Blind2Unblind

@inproceedings{wang2022blind2unblind,
  title={Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots}, 
  author={Zejin Wang and Jiazheng Liu and Guoqing Li and Hua Han},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Installation

The model is built with Python 3.8.5 and PyTorch 1.7.1 on Ubuntu 18.04.
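
A quick sanity check for the environment (a minimal sketch; any CUDA build of PyTorch 1.7.1 should behave the same):

    # check_env.py -- confirm interpreter and PyTorch versions before training
    import sys
    import torch

    print("Python:", sys.version.split()[0])   # expected: 3.8.5
    print("PyTorch:", torch.__version__)       # expected: 1.7.1
    print("CUDA available:", torch.cuda.is_available())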

Data Preparation

1. Prepare Training Dataset

  • For processing the ImageNet validation set, please run the command (an illustrative patch-extraction sketch follows this list)

    python ./dataset_tool.py
  • For processing the SIDD Medium dataset in raw-RGB, please run the command

    python ./dataset_tool_raw.py
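
Both tools prepare a cropped training set (the SIDD folder name sub512 suggests 512x512 sub-images). For reference only, the sketch below shows the general idea of such a preprocessing step; the source path, crop size, and output format are assumptions, not the actual behavior of dataset_tool.py:

    # make_patches.py -- illustrative preprocessing sketch (NOT the actual dataset_tool.py)
    import os
    from PIL import Image

    SRC = "./data/ILSVRC2012_img_val"   # assumed location of the raw ImageNet validation images
    DST = "./data/train/Imagenet_val"   # matches --data_dir used by train_b2u.py below
    PATCH = 256                         # assumed patch size

    os.makedirs(DST, exist_ok=True)
    for name in sorted(os.listdir(SRC)):
        img = Image.open(os.path.join(SRC, name)).convert("RGB")
        w, h = img.size
        if w < PATCH or h < PATCH:
            continue                    # skip images smaller than one patch
        left, top = (w - PATCH) // 2, (h - PATCH) // 2
        img.crop((left, top, left + PATCH, top + PATCH)).save(
            os.path.join(DST, os.path.splitext(name)[0] + ".png"))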

2. Prepare Validation Dataset

Please put your validation datasets under the path: ./Blind2Unblind/data/validation.
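
The test commands below look for the benchmark sets inside this folder. A layout along the following lines is assumed (the subfolder names are inferred from the test sets this README mentions, not prescribed by it):

    ./Blind2Unblind/data/validation
    ├── Kodak
    ├── BSD300
    └── Set14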

Pretrained Models

Download pre-trained models: Google Drive

Place the downloaded pre-trained models in the folder: ./Blind2Unblind/pretrained_models

# For synthetic denoising
# gauss25
./pretrained_models/g25_112f20_beta19.7.pth
# gauss5_50
./pretrained_models/g5-50_112rf20_beta19.4.pth
# poisson30
./pretrained_models/p30_112f20_beta19.1.pth
# poisson5_50
./pretrained_models/p5-50_112rf20_beta20.pth

# For raw-RGB denoising
./pretrained_models/rawRGB_112rf20_beta19.4.pth

# For fluorescence microscopy denoising
# Confocal_FISH
./pretrained_models/Confocal_FISH_112rf20_beta20.pth
# Confocal_MICE
./pretrained_models/Confocal_MICE_112rf20_beta19.7.pth
# TwoPhoton_MICE
./pretrained_models/TwoPhoton_MICE_112rf20_beta20.pth
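
The checkpoints are ordinary PyTorch files. A minimal inspection sketch (whether a file stores a bare state_dict or a wrapping dictionary is an assumption; check against the test scripts):

    # load_ckpt.py -- peek inside a checkpoint (state_dict layout is an assumption)
    import torch

    ckpt = torch.load("./pretrained_models/g25_112f20_beta19.7.pth", map_location="cpu")
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    for name, tensor in list(state.items())[:5]:
        print(name, tuple(tensor.shape))    # first few parameter tensors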

Train

  • Train on the synthetic dataset (a sketch of the --noisetype noise models follows this list)

    python train_b2u.py --noisetype gauss25 --data_dir ./data/train/Imagenet_val --val_dirs ./data/validation --save_model_path ../experiments/results --log_name b2u_unet_gauss25_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
  • Train on the SIDD Medium dataset in raw-RGB

    python train_sidd_b2u.py --data_dir ./data/train/SIDD_Medium_Raw_noisy_sub512 --val_dirs ./data/validation --save_model_path ../experiments/results --log_name b2u_unet_raw_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
  • Train on the FMDD dataset

    python train_fmdd_b2u.py --data_dir ./dataset/fmdd_sub/train --val_dirs ./dataset/fmdd_sub/validation --subfold Confocal_FISH --save_model_path ../experiments/fmdd --log_name Confocal_FISH_b2u_unet_fmdd_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
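
The --noisetype flag selects the synthetic corruption. The conventional meaning of these settings in self-supervised denoising work is sketched below; the repository's own implementation may differ in details such as value range and clipping:

    # noise_models.py -- conventional synthetic noise settings (a sketch, not the repo's code)
    import numpy as np

    rng = np.random.default_rng(0)

    def add_noise(clean, noisetype):
        """clean: float array scaled to [0, 1]; returns a noisy copy."""
        if noisetype == "gauss25":        # fixed Gaussian, sigma = 25 (on the 0-255 scale)
            return clean + rng.normal(0.0, 25 / 255.0, clean.shape)
        if noisetype == "gauss5_50":      # sigma drawn uniformly from [5, 50]
            return clean + rng.normal(0.0, rng.uniform(5, 50) / 255.0, clean.shape)
        if noisetype == "poisson30":      # Poisson with rate lambda = 30
            return rng.poisson(clean * 30) / 30.0
        if noisetype == "poisson5_50":    # lambda drawn uniformly from [5, 50]
            lam = rng.uniform(5, 50)
            return rng.poisson(clean * lam) / lam
        raise ValueError(f"unknown noisetype: {noisetype}")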

Test

  • Test on Kodak, BSD300 and Set14

    • For noisetype: gauss25

      python test_b2u.py --noisetype gauss25 --checkpoint ./pretrained_models/g25_112f20_beta19.7.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_g25_112rf20 --beta 19.7
    • For noisetype: gauss5_50

      python test_b2u.py --noisetype gauss5_50 --checkpoint ./pretrained_models/g5-50_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_g5_50_112rf20 --beta 19.4
    • For noisetype: poisson30

      python test_b2u.py --noisetype poisson30 --checkpoint ./pretrained_models/p30_112f20_beta19.1.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_p30_112rf20 --beta 19.1
    • For noisetype: poisson5_50

      python test_b2u.py --noisetype poisson5_50 --checkpoint ./pretrained_models/p5-50_112rf20_beta20.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_p5_50_112rf20 --beta 20.0
  • Test on SIDD Validation in raw-RGB space

    python test_sidd_b2u.py --checkpoint ./pretrained_models/rawRGB_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name validation_b2u_unet_raw_112rf20 --beta 19.4
  • Test on SIDD Benchmark in raw-RGB space

    python benchmark_sidd_b2u.py --checkpoint ./pretrained_models/rawRGB_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name benchmark_b2u_unet_raw_112rf20 --beta 19.4
  • Test on FMDD Validation

    • For Confocal_FISH

      python test_fmdd_b2u.py --checkpoint ./pretrained_models/Confocal_FISH_112rf20_beta20.pth --test_dirs ./dataset/fmdd_sub/validation --subfold Confocal_FISH --save_test_path ./test --log_name Confocal_FISH_b2u_unet_fmdd_112rf20 --beta 20.0
    • For Confocal_MICE

      python test_fmdd_b2u.py --checkpoint ./pretrained_models/Confocal_MICE_112rf20_beta19.7.pth --test_dirs ./dataset/fmdd_sub/validation --subfold Confocal_MICE --save_test_path ./test --log_name Confocal_MICE_b2u_unet_fmdd_112rf20 --beta 19.7
    • For TwoPhoton_MICE

      python test_fmdd_b2u.py --checkpoint ./pretrained_models/TwoPhoton_MICE_112rf20_beta20.pth --test_dirs ./dataset/fmdd_sub/validation --subfold TwoPhoton_MICE --save_test_path ./test --log_name TwoPhoton_MICE_b2u_unet_fmdd_112rf20 --beta 20.0
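
The test scripts compare denoised outputs against the clean references. For orientation, PSNR, the standard metric on these benchmarks, is computed as below (a generic sketch, not the repository's exact evaluation code):

    # psnr.py -- generic PSNR for 8-bit images (not the repo's evaluation code)
    import numpy as np

    def psnr(clean, denoised, peak=255.0):
        """clean, denoised: uint8 arrays of identical shape."""
        mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
        return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)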