This repo provides the source code for "Cross-Domain Adaptive Teacher for Object Detection".

Overview

Cross-Domain Adaptive Teacher for Object Detection

License: CC BY-NC 4.0

This is the PyTorch implementation of our paper:
Cross-Domain Adaptive Teacher for Object Detection
Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022

[Paper] [Project]

Installation

Prerequisites

  • Python ≥ 3.6
  • PyTorch ≥ 1.5 and torchvision that matches the PyTorch installation.
  • Detectron2 == 0.3 (the version we used to run this code)

Our tested environment

  • 8× V100 GPUs (batch size 16)
  • 4× 2080 Ti GPUs (batch size 4)

Install the Python environment

To install the required dependencies in a Python virtual environment (e.g., venv for Python 3), run the following commands at the root of this repo:

$ python3 -m venv /path/to/new/virtual/environment
$ source /path/to/new/virtual/environment/bin/activate

For example:

$ mkdir python_env
$ python3 -m venv python_env/
$ source python_env/bin/activate

Build Detectron2 from Source

Follow the INSTALL.md to install Detectron2.

Dataset download

  1. Download the datasets (Cityscapes, Foggy Cityscapes, PASCAL VOC, Clipart1k, and Watercolor)

  2. Organize the datasets in the Cityscapes and PASCAL VOC formats as follows:

adaptive_teacher/
└── datasets/
    ├── cityscapes/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── cityscapes_foggy/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── VOC2012/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    ├── clipart/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    └── watercolor/
        ├── Annotations/
        ├── ImageSets/
        └── JPEGImages/
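Before training, a quick sanity check of this layout can save a failed run. The following is a minimal, hypothetical sketch (not part of this repo) that verifies the folders above exist:

import os

EXPECTED = [
    "cityscapes/gtFine/train",
    "cityscapes/leftImg8bit/train",
    "cityscapes_foggy/gtFine/train",
    "cityscapes_foggy/leftImg8bit/train",
    "VOC2012/Annotations",
    "VOC2012/JPEGImages",
    "clipart/Annotations",
    "clipart/JPEGImages",
    "watercolor/Annotations",
    "watercolor/JPEGImages",
]

missing = [p for p in EXPECTED if not os.path.isdir(os.path.join("datasets", p))]
if missing:
    raise FileNotFoundError(f"Missing dataset folders: {missing}")
print("Dataset layout looks good.")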

Training

  • Train the Adaptive Teacher on PASCAL VOC (source) and Clipart1k (target):

python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      OUTPUT_DIR output/exp_clipart

  • Train the Adaptive Teacher on Cityscapes (source) and Foggy Cityscapes (target):

python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      OUTPUT_DIR output/exp_city

Resume the training

python train_net.py \
      --resume \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml MODEL.WEIGHTS <your weight>.pth

Evaluation

python train_net.py \
      --eval-only \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      MODEL.WEIGHTS <your weight>.pth

Results and Model Weights

If you urgently need the pre-trained weights, please download our internal prod_weights at the Link. Note that the key names in the pre-trained models differ slightly from this repo's, so you will need to align them manually. Otherwise, please wait: we will try to release locally trained weights in the future.
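As a rough sketch of that manual alignment, assuming the mismatch is only a key-name prefix (the exact prefix may differ, so inspect the keys first; the "modelStudent." prefix below is a guess):

import torch

ckpt = torch.load("prod_weights.pth", map_location="cpu")
state = ckpt.get("model", ckpt)
print(list(state.keys())[:5])  # inspect the naming scheme before renaming

# Hypothetical remap: strip a "modelStudent." prefix if present.
renamed = {k.replace("modelStudent.", "", 1): v for k, v in state.items()}
torch.save({"model": renamed}, "aligned_weights.pth")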

Real to Artistic Adaptation:

| Backbone | Source set (labeled) | Target set (unlabeled) | Batch size | AP@0.5 | Model Weights | Comment |
|----------|----------------------|------------------------|------------|--------|---------------|---------|
| R101 | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 40.1 | link | Ours w/o discriminator (dis=0) |
| R101 | VOC12 | Clipart1k | 4 labeled + 4 unlabeled | 47.2 | link | lr=0.01, dis_w=0.1, default |
| R101 | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 49.6 | link | Ours in the paper, unsup_w=0.5 |
| R101+FPN | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 51.2 | link (coming soon) | For future work |

Weather Adaptation:

| Backbone | Source set (labeled) | Target set (unlabeled) | Batch size | AP@0.5 | Model Weights | Comment |
|----------|----------------------|------------------------|------------|--------|---------------|---------|
| VGG16 | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 48.7 | link (coming soon) | Ours w/o discriminator |
| VGG16 | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 50.9 | link (coming soon) | Ours in the paper |
| VGG16 | Cityscapes | Foggy Cityscapes (0.02) | 16 labeled + 16 unlabeled | in progress | link (coming soon) | Ours in the paper |
| VGG16+FPN | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 57.4 | link (coming soon) | For future work |

Citation

If you use Adaptive Teacher in your research or wish to refer to the results published in the paper, please use the following BibTeX entry.

@inproceedings{li2022cross,
    title={Cross-Domain Adaptive Teacher for Object Detection},
    author={Li, Yu-Jhe and Dai, Xiaoliang and Ma, Chih-Yao and Liu, Yen-Cheng and Chen, Kan and Wu, Bichen and He, Zijian and Kitani, Kris and Vajda, Peter},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
} 

Also, if you use Detectron2 in your research, please use the following BibTeX entry.

@misc{wu2019detectron2,
  author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
                  Wan-Yen Lo and Ross Girshick},
  title =        {Detectron2},
  howpublished = {\url{https://github.com/facebookresearch/detectron2}},
  year =         {2019}
}

License

This project is licensed under the CC-BY-NC 4.0 License, as found in the LICENSE file.

Comments
  • Distributed training failure

    Hi,

    When running the training code, I encountered the following issue.

    Exception during training:

    Traceback (most recent call last):
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 402, in train_loop
        self.run_step_full_semisup()
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 597, in run_step_full_semisup
        all_label_data, branch="supervised"
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 787, in forward
        if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
    RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
    Parameter indices which did not receive grad for rank 1: 66 67 68 69 70 71 72 73
    In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error

    Then I added find_unused_parameters=True to the DistributedDataParallel() call, and the problem was solved.
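    For reference, the change amounts to something like this minimal sketch (assuming a manual wrap; in Detectron2-based trainers the wrapping happens inside the trainer code):

    import torch
    from torch.nn.parallel import DistributedDataParallel

    def wrap_ddp(model: torch.nn.Module, local_rank: int) -> DistributedDataParallel:
        # find_unused_parameters=True tolerates forward passes (e.g. the
        # different "supervised"/"domain" branches here) that leave some
        # parameters without gradients, at the cost of an extra graph
        # traversal per iteration.
        return DistributedDataParallel(
            model,
            device_ids=[local_rank],
            broadcast_buffers=False,
            find_unused_parameters=True,
        )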

    But now I have another issue.

    Exception during training:

    Traceback (most recent call last):
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 403, in train_loop
        self.run_step_full_semisup()
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 657, in run_step_full_semisup
        losses.backward()
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes, or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
    Parameter at index 65 with name roi_heads.box_predictor.bbox_pred.bias has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.

    Answers online suggest setting find_unused_parameters=False, but that brings back the previous error.

    I was wondering if you have a better solution.

    My environment: detectron2 v0.5, pytorch 1.9.0, cuda 11.1

    Thanks

    opened by litingfeng 9
  • AP NaN

    Hello,

    I formed a new target dataset in Pascal VOC format, and as I understand it, the target dataset should be unlabeled, so I did not add .xml files to the Annotations folder of the target dataset. But how does the evaluation of the unlabeled images work in the teacher model if there are no ground-truth boxes?

    Specifically, at every EVAL_PERIOD iteration this line returns NaN: https://github.com/facebookresearch/adaptive_teacher/blob/d57d20640ae314a42c43dd82b1c1e26e90fa4b95/adapteacher/evaluation/pascal_voc_evaluation.py#L305

    What should be done instead? Thanks!
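    If the NaN comes from classes with zero ground-truth boxes (npos == 0 makes the recall 0/0), one stop-gap, purely as a sketch, is to exclude NaN per-class APs when averaging; the proper fix is to evaluate on a labeled target validation split:

    import numpy as np

    def safe_mean_ap(per_class_aps):
        # NaN entries correspond to classes with no ground-truth boxes
        # in the (unlabeled) evaluation split; ignore them in the mean.
        aps = np.asarray(per_class_aps, dtype=np.float64)
        valid = ~np.isnan(aps)
        return float(aps[valid].mean()) if valid.any() else float("nan")

    print(safe_mean_ap([0.5, float("nan"), 0.7]))  # -> 0.6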

    opened by darkhan-s 4
  • some puzzles about using "branch.startswith("supervised")" in adapteacher/modeling/meta_arch/rcnn.py

    Hi, I see that line 217 of adapteacher/modeling/meta_arch/rcnn.py uses "if branch.startswith("supervised")", and I am confused by it. When the loss on unlabeled data with pseudo-labels is computed (line 605 of adapteacher/engine/trainer.py), it should run in the "supervised_target" branch, but this condition also matches that branch, which I think results in the wrong domain label for loss_D_img_s_pseudo. Please check it.

    opened by gedebachichi 3
  • possible to wrap the teacher model by DistributedDataParallel?

    Hello, I'm trying to use your idea in my thesis work; thanks for your great idea and code! I set requires_grad=False for all the parameters in the teacher model and wrapped it in DistributedDataParallel. But with my own code the training gets stuck at loss.backward(), even though the losses are not NaN. If I lower the batch size and run on just 1 GPU, the code works fine; with DistributedDataParallel, the training gets stuck immediately.

    Would you have an idea about it? Is it because the exponential moving average somehow affects the computation graph? Thanks
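    For reference, in a mean-teacher setup the teacher is updated by an exponential moving average (EMA) outside autograd, so it normally does not need a DistributedDataParallel wrapper at all; only the student does. A minimal sketch of the update, assuming keep_rate is the EMA momentum:

    import torch

    @torch.no_grad()  # the EMA update must stay outside the autograd graph
    def update_teacher(teacher: torch.nn.Module, student: torch.nn.Module,
                       keep_rate: float = 0.9996) -> None:
        student_params = dict(student.named_parameters())
        for name, param in teacher.named_parameters():
            # teacher <- keep_rate * teacher + (1 - keep_rate) * student
            param.data.copy_(param.data * keep_rate
                             + student_params[name].data * (1.0 - keep_rate))

    Since no gradients ever flow into the teacher, wrapping it in DDP gives it reducer hooks with nothing to reduce, which is one plausible cause of the hang.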

    opened by Weijiang-Xiong 2
  • Evaluation for each category

    Hi authors,

    Great work! I tried to reproduce the Adaptive Teacher, but the evaluation script only reports COCO-style metrics. Do you have a script that outputs per-category AP so that we can compare with the results in the paper?

    Thanks!

    opened by helq2612 2
  • Why is the discriminator trained in the supervised and target branches?

    I noticed that in rcnn.py, loss_D_img_s and loss_D_img_t are trained with a small weight. What is the meaning of these two lines of code?

    Is this the way to initialize the discriminator? Will it prevent the model suffer from Model Collapse, which is caused by the discriminator?

    losses["loss_D_img_s"] = loss_D_img_s * 0.001
    losses["loss_D_img_t"] = loss_D_img_t * 0.001

    Will the performance of the model be affected if the two lines of code above are removed and the model is trained only with the following two lines in the domain branch?

    losses["loss_D_img_s"] = loss_D_img_s
    losses["loss_D_img_t"] = loss_D_img_t
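    For context, image-level adversarial alignment of this kind typically feeds backbone features to the discriminator through a gradient reversal layer (GRL), so even small-weighted discriminator losses push the backbone toward domain-invariant features. A minimal GRL sketch (my own illustration, not the repo's code):

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; multiplies the gradient by -lambd in
        # the backward pass, so the feature extractor learns to *fool* the
        # discriminator while the discriminator itself is trained normally.
        @staticmethod
        def forward(ctx, x, lambd=1.0):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output.neg() * ctx.lambd, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)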

    opened by Pandaxia8 2
  • Regarding the Watercolor Dataset Config

    Hello, I am wondering whether there is a specific split file we have to run to obtain the VOC split for the watercolor dataset. The burn-in training for watercolor only trains on the 7 overlapping classes that span both VOC and watercolor. Is that part of the code missing, or is it assumed to be done by ourselves? Many thanks.

    opened by michaelku1 2
  • Question about Figure 4

    Thank you for your work. But I have a question about Figure 4 in the main paper.

    It seems that with only 10k iterations of source-only pre-training, the model achieves around 33.0 mAP, which significantly outperforms the well-trained source-only result (28.8). Does this mean that the Detectron2-implemented Faster R-CNN works better?

    opened by tmp12316 2
  • loss nan

    When I set Dis_loss_weight=0.1, the model collapses. I see the same problem in https://github.com/facebookresearch/detectron2/issues/1128. According to your solution, setting a smaller dis_weight alleviates this issue, but it yields a poor mAP. How did you train your model with Dis_loss_weight=0.1?

    [05/30 11:21:11] d2.utils.events INFO: eta: 8:40:21 iter: 9999 total_loss: nan loss_cls: nan loss_box_reg: nan loss_rpn_cls: 0.4926 loss_rpn_loc: 0.2313 loss_D_img_s: nan loss_D_img_t: nan time: 0.6949 data_time: 0.0415 lr: 0.01 max_mem: 5007M
    [05/30 11:21:25] d2.utils.events INFO: eta: 8:39:59 iter: 10019 total_loss: nan loss_cls: nan loss_box_reg: nan loss_rpn_cls: 0.4825 loss_rpn_loc: 0.2396 loss_D_img_s: nan loss_D_img_t: nan time: 0.6948 data_time: 0.0418 lr: 0.01 max_mem: 5007M
    [05/30 11:21:38] d2.utils.events INFO: eta: 8:39:40 iter: 10039 total_loss: nan loss_cls: nan loss_box_reg: nan loss_rpn_cls: 0.4763 loss_rpn_loc: 0.2427 loss_D_img_s: nan loss_D_img_t: nan time: 0.6948 data_time: 0.0349 lr: 0.01 max_mem: 5007M
    [05/30 11:21:52] d2.utils.events INFO: eta: 8:38:53 iter: 10059 total_loss: nan loss_cls: nan loss_box_reg: nan loss_rpn_cls: 0.4791 loss_rpn_loc: 0.232 loss_D_img_s: nan loss_D_img_t: nan time: 0.6947 data_time: 0.0333 lr: 0.01 max_mem: 5007M
    [05/30 11:22:05] d2.utils.events INFO: eta: 8:38:33 iter: 10079 total_loss: nan loss_cls: nan loss_box_reg: nan loss_rpn_cls: 0.493 loss_rpn_loc: 0.2346 loss_D_img_s: nan loss_D_img_t: nan time: 0.6947 data_time: 0.0344 lr: 0.01 max_mem: 5007M
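    One general way to localize where a NaN first appears is PyTorch's anomaly detection; this is a standard debugging aid, not a fix from the authors:

    import torch

    # Slows training noticeably, but the backward pass then reports which
    # operation first produced a NaN/Inf gradient.
    torch.autograd.set_detect_anomaly(True)

    A common stabilizer when adversarial losses blow up is clipping gradients with torch.nn.utils.clip_grad_norm_ before the optimizer step, though whether that preserves the reported numbers is untested.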

    opened by Pandaxia8 1
  • About VGG16 pre-trained on ImageNet

    We found in Chapter 4.2 of the paper: "ResNet101 [13] or VGG16 [36] pre-trained on ImageNet [7]". However, in adaptive_teacher/configs/faster_rcnn_VGG_cross_city.yaml, VGG16 does not use ImageNet pre-trained parameters the way adaptive_teacher/configs/faster_rcnn_R101_cross_water.yaml does.

    We would like to know whether VGG16 is pretrained on ImageNet or not. Thank you very much.

    opened by pengjw23 1
  • Cannot reproduce the results on "foggy cityscapes" due to Out of Memory issue

    I am getting a "Cannot allocate memory" error after around 13-15k iterations while trying to reproduce results on the "foggy cityscapes" dataset. I am running this code on 4 GPUs with 360 GB of memory.

    I can reproduce the VOC results on the same machine! The error occurs only on the cityscapes dataset. I suspect the memory usage keeps increasing with iterations.


    Environment: Python 3.7.10, torch 1.7.0, torchvision 0.8.1, detectron2 0.5

    cfg parameters used for my trial:

    MAX_ITER: 100000
    IMG_PER_BATCH_LABEL: 8
    IMG_PER_BATCH_UNLABEL: 8
    BASE_LR: 0.04
    BURN_UP_STEP: 20000
    EVAL_PERIOD: 1000
    NUM_WORKERS: 4

    Error: ImportError: /scratch/1/ace14705nl/adaptive_teacher/.venv/lib/python3.7/site-packages/PIL/_imaging.cpython-37m-x86_64-linux-gnu.so: failed to map segment from shared object: Cannot allocate memory


    UPDATE: when I tried this experiment on another GPU cluster (4 V100 NVLINK GPUs, 256 GB), I could run the code for 28K iterations and get AP@0.5 around 46, but again the process was terminated due to a memory issue.

    "iterations =>> PBS: job killed: mem 269239236kb exceeded limit 268435456kb"

    I can't figure out why so much memory (269 GB) is required while running this code on the cityscapes dataset. I would highly appreciate any help. Thanks.

    opened by onkarkris 6
  • The AP and APl are 0.000 when evaluating on cityscapes_foggy_val

    OK, the model will be evaluated two times: if you are in the burn-in stage, you will get 0 AP for the teacher ("bbox" is not used; the student results appear under "bbox_student"; screenshots omitted).

    Originally posted by @yujheli in https://github.com/facebookresearch/adaptive_teacher/issues/20#issuecomment-1179671818

    How should I understand "the stage of burn-in"? Does it mean the number of training epochs is not enough?

    opened by Doris1231 1
  • AttributeError: 'NoneType' object has no attribute 'keys'

    I was trying to reproduce the training from PASCAL VOC (source) to Clipart1k (target) using

    python train_net.py \
          --num-gpus 8 \
          --config configs/faster_rcnn_R101_cross_clipart.yaml \
          OUTPUT_DIR output/exp_clipart
    

    However, I got the following error message:

    Traceback (most recent call last):
      File ".../adaptive_teacher/train_net.py", line 73, in <module>
        launch(
      File ".../detectron2-0.3/detectron2/engine/launch.py", line 55, in launch
        mp.spawn(
      File ".../lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File ".../lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
        while not context.join():
      File ".../lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException:
    
    -- Process 4 terminated with the following error:
    Traceback (most recent call last):
      File ".../lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
        fn(i, *args)
      File ".../detectron2-0.3/detectron2/engine/launch.py", line 94, in _distributed_worker
        main_func(*args)
      File ".../adaptive_teacher/train_net.py", line 64, in main
        trainer.resume_or_load(resume=args.resume)
      File ".../adaptive_teacher/adapteacher/engine/trainer.py", line 337, in resume_or_load
        checkpoint = self.checkpointer.resume_or_load(
      File ".../lib/python3.10/site-packages/fvcore/common/checkpoint.py", line 229, in resume_or_load
        return self.load(path, checkpointables=[])
      File ".../lib/python3.10/site-packages/fvcore/common/checkpoint.py", line 156, in load
        incompatible = self._load_model(checkpoint)
      File ".../adaptive_teacher/adapteacher/checkpoint/detection_checkpoint.py", line 24, in _load_model
        incompatible = self._load_student_model(checkpoint)
      File ".../adaptive_teacher/adapteacher/checkpoint/detection_checkpoint.py", line 64, in _load_student_model
        self._convert_ndarray_to_tensor(checkpoint_state_dict)
      File ".../lib/python3.10/site-packages/fvcore/common/checkpoint.py", line 368, in _convert_ndarray_to_tensor
        for k in list(state_dict.keys()):
    AttributeError: 'NoneType' object has no attribute 'keys'
    

    I have pinpointed the issue to detectron2.checkpoint.c2_model_loading.align_and_update_state_dicts removing all information in checkpoint["model"]: it is completely normal before entering this function, but becomes None after the function call in line 17 of .../adaptive_teacher/adapteacher/checkpoint/detection_checkpoint.py.

    Could you please confirm why this function returns None? I really appreciate your help!
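    A quick, generic way to inspect what the checkpoint actually contains before the loader touches it (a debugging sketch, not repo code):

    import torch

    ckpt = torch.load("<your weight>.pth", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))
        model_state = ckpt.get("model")
        if isinstance(model_state, dict):
            print(list(model_state.keys())[:10])  # sample of parameter names

    If "model" is a dict here but None inside _load_student_model, the key is being popped or overwritten earlier in the loading path.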

    As you might have noticed, I am using the following environment:

    • Python 3.10.4
    • torch==1.12.0+cu102 (& torchvision of the same version)
    • Detectron2==0.3
    • Latest adaptive_teacher (I just confirmed right before submitting this issue)
    • V100
    opened by zwang123 5