
A-Transformers

A collection of transformer models, in PyTorch.

```bash
pip install a-transformers
```


Usage

Transformer

```python
import torch
from a_transformers.transformers import Transformer

transformer = Transformer(
    features=768,
    max_length=256,
    num_layers=12,
    head_features=64,
    num_heads=12,
    multiplier=4
)

x = torch.randn(2, 12, 768)
y = transformer(x) # [2, 12, 768]
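As the shapes above show, the transformer maps a `[batch, tokens, features]` tensor to an output of the same shape. If it helps to see why, here is a minimal NumPy sketch of multi-head self-attention, the shape-preserving core of each layer. This is a conceptual illustration only (no learned projections, all names hypothetical), not the package's internals:

```python
import numpy as np

def self_attention(x, num_heads=2):
    # x: [batch, tokens, features]; sketch uses x itself as q, k, v
    b, t, d = x.shape
    hd = d // num_heads
    q = k = v = x.reshape(b, t, num_heads, hd).transpose(0, 2, 1, 3)  # [b, h, t, hd]
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(hd)                # [b, h, t, t]
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)                         # softmax over keys
    out = weights @ v                                                 # [b, h, t, hd]
    return out.transpose(0, 2, 1, 3).reshape(b, t, d)                 # back to [b, t, d]

x = np.random.randn(2, 12, 8)
y = self_attention(x)
assert y.shape == (2, 12, 8)  # same shape in, same shape out
```

Each token attends over all tokens, so the token count and feature width are unchanged end to end.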

Resampler

```python
import torch
from a_transformers.transformers import Resampler

resampler = Resampler(
    features=768,
    in_tokens=12,
    out_tokens=4,
    num_layers=12,
    head_features=64,
    num_heads=12,
    multiplier=4
)

x = torch.randn(2, 12, 768)
y = resampler(x) # [2, 4, 768]
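The resampler reduces 12 input tokens to 4 output tokens at the same feature width. One common way to do this (used by Perceiver-style models; whether this package does exactly the same is an assumption) is cross-attention from a small set of learned query tokens onto the input sequence. A minimal NumPy sketch, with hypothetical names and random stand-ins for learned parameters:

```python
import numpy as np

def resample(x, queries):
    # x: [batch, in_tokens, features]; queries: [out_tokens, features] (learned in a real model)
    b, t_in, d = x.shape
    q = np.broadcast_to(queries, (b,) + queries.shape)  # [b, out_tokens, d]
    scores = q @ x.transpose(0, 2, 1) / np.sqrt(d)      # [b, out_tokens, in_tokens]
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                       # softmax over input tokens
    return w @ x                                        # [b, out_tokens, d]

x = np.random.randn(2, 12, 768)
queries = np.random.randn(4, 768)   # stand-in for learned query embeddings
y = resample(x, queries)
assert y.shape == (2, 4, 768)
```

The output token count is set by the number of queries, independent of the input length.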

RQ-Transformer

```python
import torch
from a_transformers.rq_transformer import RQTransformer

num_residuals = 4
codebook_size = 2048

rqtransformer = RQTransformer(
    features=768,
    max_length=64,
    max_residuals=num_residuals,
    num_tokens=codebook_size,
    num_layers=8,
    head_features=64,
    num_heads=8,
    multiplier=4,
    shared_codebook=False
)

# Training
x = torch.randint(0, codebook_size, (1, 64, num_residuals)) # [b, t, r]
loss = rqtransformer(x) # tensor(9.399146, grad_fn=<NllLoss2DBackward0>)

# Generation
sequence = rqtransformer.generate(x, sequence_length=64) # [1, 64, 4]
```
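The RQ-Transformer consumes index tensors of shape `[batch, tokens, residuals]`, where each token is described by a stack of codebook indices produced by residual quantization: quantize the vector, subtract the chosen codeword, then quantize the remainder with the next codebook. A minimal NumPy sketch of that encoding step (small dimensions for the demo; names and sizes are illustrative, not the package's API):

```python
import numpy as np

def residual_quantize(x, codebooks):
    # x: [b, t, d]; codebooks: [r, codebook_size, d], one codebook per residual level
    residual = x.copy()
    indices = []
    for codebook in codebooks:
        # nearest codeword per token (squared Euclidean distance)
        d2 = ((residual[..., None, :] - codebook) ** 2).sum(-1)  # [b, t, codebook_size]
        idx = d2.argmin(-1)                                      # [b, t]
        residual = residual - codebook[idx]                      # quantize the remainder next
        indices.append(idx)
    return np.stack(indices, axis=-1)                            # [b, t, r]

codebooks = np.random.randn(4, 32, 16)  # 4 residual levels, 32 codewords, 16 features
x = np.random.randn(1, 8, 16)
codes = residual_quantize(x, codebooks)
assert codes.shape == (1, 8, 4)         # [b, t, r], the layout RQTransformer trains on
```

Each additional residual level refines the reconstruction, which is why `shared_codebook=False` gives each level its own codewords.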
Releases: v0.0.1

Owner: archinet (Open Source AI Research Group)