[Preprint'22] Tokenized Graph Transformer (TokenGT), in PyTorch

Overview

Tokenized Graph Transformer - Official PyTorch Implementation

Pure Transformers are Powerful Graph Learners
Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
arXiv preprint (https://arxiv.org/abs/2207.02505)

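The paper's central claim is that a standard Transformer, with no graph-specific attention or message passing, becomes a strong graph learner once every node and every edge is embedded as a token that carries (1) node identifiers, either orthogonal random features (ORF) or Laplacian eigenvectors (Lap), and (2) a trainable type embedding distinguishing node tokens from edge tokens. Below is a rough, unofficial PyTorch sketch of that tokenization; all class, argument, and dimension names are our own illustration, not the repository's API.

import torch
import torch.nn as nn

class TinyTokenGT(nn.Module):
    # Toy illustration: node/edge tokens + node identifiers + type embeddings,
    # fed to an off-the-shelf Transformer encoder. Not the official model
    # (e.g. the paper's extra [graph] token is omitted here).
    def __init__(self, feat_dim, id_dim, hidden_dim, num_layers=2, num_heads=4):
        super().__init__()
        self.id_dim = id_dim
        # Each token = raw features concatenated with two identifier channels.
        self.proj = nn.Linear(feat_dim + 2 * id_dim, hidden_dim)
        self.type_emb = nn.Embedding(2, hidden_dim)  # 0 = node token, 1 = edge token
        layer = nn.TransformerEncoderLayer(hidden_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, node_feat, edge_feat, edge_index):
        # node_feat: [n, feat_dim]; edge_feat: [m, feat_dim]; edge_index: [2, m]
        n, m = node_feat.size(0), edge_feat.size(0)
        # ORF-style identifiers: orthonormal rows via QR (assumes id_dim >= n).
        q, _ = torch.linalg.qr(torch.randn(self.id_dim, n))
        ids = q.t()  # [n, id_dim], rows are orthonormal
        node_tok = torch.cat([node_feat, ids, ids], dim=-1)   # node v gets [P_v, P_v]
        edge_tok = torch.cat([edge_feat, ids[edge_index[0]],  # edge (u, v) gets
                              ids[edge_index[1]]], dim=-1)    # [P_u, P_v]
        tokens = self.proj(torch.cat([node_tok, edge_tok], dim=0))
        types = torch.cat([torch.zeros(n, dtype=torch.long),
                           torch.ones(m, dtype=torch.long)])
        tokens = tokens + self.type_emb(types)
        return self.encoder(tokens.unsqueeze(0))  # [1, n + m, hidden_dim]

# Example: a 5-node path graph with 4 directed edges.
x, e = torch.randn(5, 8), torch.randn(4, 8)
idx = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(TinyTokenGT(feat_dim=8, id_dim=16, hidden_dim=32)(x, e, idx).shape)
# torch.Size([1, 9, 32])

Because the identifiers let attention recover incidence structure, the encoder itself stays completely standard; the Lap variant replaces the random orthonormal rows with Laplacian eigenvectors of the graph.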

Setting up experiments

Using the provided Docker image (recommended)

docker pull jw9730/tokengt:latest
docker run -it --gpus=all --ipc=host --name=tokengt -v /home:/home jw9730/tokengt:latest bash
# upon completion, you should be at /tokengt inside the container

Using the provided Dockerfile

git clone --recursive https://github.com/jw9730/tokengt.git /tokengt
cd /tokengt
docker build --no-cache --tag tokengt:latest .
docker run -it --gpus all --ipc=host --name=tokengt -v /home:/home tokengt:latest bash
# upon completion, you should be at /tokengt inside the container

Using pip

sudo apt-get update
sudo apt-get install python3.9
git clone --recursive https://github.com/jw9730/tokengt.git tokengt
cd tokengt
bash install.sh
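
Once install.sh completes, a quick sanity check (a minimal sketch that only assumes the PyTorch installation succeeded) confirms the environment:

# Post-install sanity check: PyTorch imports and can see the GPU.
import torch
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())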

Running experiments

PCQM4Mv2 large-scale graph regression

cd large-scale-regression/scripts

# TokenGT (ORF)
bash pcqv2-orf.sh

# TokenGT (Lap)
bash pcqv2-lap.sh

# TokenGT (Lap) + Performer
bash pcqv2-lap-performer-finetune.sh

# TokenGT (ablated)
bash pcqv2-ablated.sh

# Attention distance plot for TokenGT (ORF)
bash visualize-pcqv2-orf.sh

# Attention distance plot for TokenGT (Lap)
bash visualize-pcqv2-lap.sh
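
All of these scripts target OGB's PCQM4Mv2 benchmark, where the model regresses the HOMO-LUMO energy gap of each molecular graph and is scored by mean absolute error. To peek at the data outside the provided pipeline, a minimal sketch using the ogb package (assumed to be available in the environment; the scripts handle data loading themselves) looks like this:

# Inspect the PCQM4Mv2 task with the ogb package; for orientation only.
from ogb.lsc import PygPCQM4Mv2Dataset, PCQM4Mv2Evaluator
import torch

dataset = PygPCQM4Mv2Dataset(root="datasets/pcqm4mv2")  # downloads on first use
print(dataset[0])  # node features, edge_index, edge features, and target gap y

# The official metric is mean absolute error on the HOMO-LUMO gap.
evaluator = PCQM4Mv2Evaluator()
print(evaluator.eval({"y_true": torch.zeros(8), "y_pred": torch.zeros(8)}))  # {'mae': 0.0}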

Pre-Trained Models

We provide checkpoints of TokenGT (ORF) and TokenGT (Lap), both trained on PCQM4Mv2. Download ckpts.zip from this link, unzip it, and place the ckpts directory under large-scale-regression/scripts, so that each trained checkpoint is located at large-scale-regression/scripts/ckpts/pcqv2-tokengt-[NODE_IDENTIFIER]-trained/checkpoint_best.pt. You can then resume training from a checkpoint by adding the option --pretrained-model-name pcqv2-tokengt-[NODE_IDENTIFIER]-trained to the training scripts.
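
As a quick check that the download is in place, you can load a checkpoint directly. In the sketch below, the folder name pcqv2-tokengt-lap-trained is an assumed example of the [NODE_IDENTIFIER] pattern (verify against the actual unzipped folder names), and reading the weights under a 'model' key assumes the usual fairseq checkpoint layout:

# Sanity-check a downloaded checkpoint; folder name and key layout are assumptions.
import torch

node_identifier = "lap"  # assumed example value; verify against ckpts/
ckpt = torch.load(
    f"large-scale-regression/scripts/ckpts/"
    f"pcqv2-tokengt-{node_identifier}-trained/checkpoint_best.pt",
    map_location="cpu",
)
print(sorted(ckpt.keys()))  # e.g. should include 'model'
print(sum(p.numel() for p in ckpt["model"].values()), "parameters")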

References

Our implementation uses code from several open-source repositories; see https://github.com/jw9730/tokengt for the credited projects.

Citation

If you find our work useful, please consider citing it:

@article{kim2022transformers,
  author    = {Jinwoo Kim and Tien Dat Nguyen and Seonwoo Min and Sungjun Cho and Moontae Lee and Honglak Lee and Seunghoon Hong},
  title     = {Pure Transformers are Powerful Graph Learners},
  journal   = {arXiv},
  volume    = {abs/2207.02505},
  year      = {2022},
  url       = {https://arxiv.org/abs/2207.02505}
}