[CVPR 2022--Oral] Restormer: Efficient Transformer for High-Resolution Image Restoration. SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.
Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).
[CVPR 2023] IMP: iterative matching and pose estimation with a transformer-based recurrent module
[MICCAI 2023] DAE-Former: Dual Attention-guided Efficient Transformer for Medical Image Segmentation
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"
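For orientation, most linear-complexity attention methods replace the softmax over all query-key pairs with a positive kernel feature map, so key-value statistics can be summarized once in O(n). The sketch below shows that generic recipe (in the style of Katharopoulos et al., 2020); it is not EcoFormer's actual mechanism, which binarizes queries and keys with energy-saving kernelized hashing.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Generic kernel-based linear attention (Katharopoulos et al. style).
    Shown only to illustrate the O(n) attention family EcoFormer belongs
    to; EcoFormer itself uses binarized kernelized hashing instead.
    q, k, v: (batch, seq, dim) tensors."""
    q = F.elu(q) + 1                                 # positive feature map phi(q)
    k = F.elu(k) + 1                                 # positive feature map phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)          # sum_n phi(k_n) v_n^T, O(n)
    norm = (q @ k.sum(dim=1).unsqueeze(-1)).clamp(min=1e-6)
    return torch.einsum('bnd,bde->bne', q, kv) / norm
```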
Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"
[ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang
Master's thesis with code investigating methods for incorporating long-context reasoning into low-resource languages without pre-training from scratch. We investigated whether multilingual models could inherit these properties by converting them into an efficient Transformer (such as the Longformer architecture).
[ICCV 2023] Efficient Video Action Detection with Token Dropout and Context Refinement
Official Implementation of Energy Transformer in PyTorch for Mask Image Reconstruction
This repository contains the official code for Energy Transformer, an efficient Energy-based Transformer variant for graph classification
A custom TensorFlow implementation of Google's ELECTRA NLP model with compositional embeddings using complementary partitions
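For illustration, the standard complementary-partitions construction is the quotient-remainder trick (Shi et al., 2019): a token id indexes two small embedding tables via its quotient and remainder, and the two looked-up vectors are combined elementwise, so every id still gets a unique representation from far fewer rows. A minimal sketch in PyTorch rather than the repo's TensorFlow; the names and the multiply combiner are illustrative, not this repo's exact code.

```python
import torch
import torch.nn as nn

class QREmbedding(nn.Module):
    """Compositional embedding via the quotient-remainder trick.
    Two small tables replace one vocab_size-row table; each id maps to a
    unique (quotient, remainder) pair of rows, combined elementwise."""
    def __init__(self, vocab_size, dim, num_buckets=1000):
        super().__init__()
        self.num_buckets = num_buckets
        q_rows = (vocab_size + num_buckets - 1) // num_buckets  # ceil division
        self.quotient = nn.Embedding(q_rows, dim)
        self.remainder = nn.Embedding(num_buckets, dim)

    def forward(self, ids):                       # ids: int64 tensor of token ids
        q = self.quotient(ids // self.num_buckets)
        r = self.remainder(ids % self.num_buckets)
        return q * r                              # elementwise-product combiner

# e.g. QREmbedding(50000, 128) stores ~51k rows instead of 50k x 1 full table rows
```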
Demo code for CVPR2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
Source code for the article on how to create a chatbot in Python: a chatbot that uses the Reformer, also known as the efficient Transformer, to generate dialogues between two bots.
Nonparametric Modern Hopfield Models
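For background, the parametric modern Hopfield network that nonparametric variants generalize retrieves a stored pattern by iterating one update: replace the query with the softmax-weighted average of the memory patterns (Ramsauer et al., 2020). A minimal sketch of that standard update, not this paper's nonparametric construction:

```python
import torch

def hopfield_retrieve(memories, query, beta=1.0, steps=1):
    """Modern (dense) Hopfield retrieval update.
    memories: (num_patterns, dim) stored patterns; query: (dim,).
    Each step pulls the query toward the best-matching stored pattern."""
    for _ in range(steps):
        weights = torch.softmax(beta * memories @ query, dim=0)
        query = memories.T @ weights             # softmax-weighted memory average
    return query
```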
MetaFormer-Based Global Contexts-Aware Network for Efficient Semantic Segmentation (Accepted by WACV 2024)
Gated Attention Unit (TensorFlow implementation)
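For reference, the Gated Attention Unit from "Transformer Quality in Linear Time" (whose implementation is also listed above) fuses a gated linear unit with cheap single-head attention scored by a squared ReLU instead of softmax. A simplified PyTorch sketch with illustrative dimensions; the paper's relative position bias and linear-time FLASH chunking are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionUnit(nn.Module):
    """Simplified Gated Attention Unit (Hua et al., 2022)."""
    def __init__(self, dim, expansion=2, qk_dim=128):
        super().__init__()
        hidden = dim * expansion
        self.to_gate = nn.Linear(dim, hidden)    # U: gating branch
        self.to_value = nn.Linear(dim, hidden)   # V: value branch
        self.to_qk = nn.Linear(dim, qk_dim)      # Z: shared base for Q and K
        self.scale = nn.Parameter(torch.ones(2, qk_dim))   # per-branch scale
        self.offset = nn.Parameter(torch.zeros(2, qk_dim)) # per-branch offset
        self.to_out = nn.Linear(hidden, dim)

    def forward(self, x):                        # x: (batch, seq, dim)
        u = F.silu(self.to_gate(x))
        v = F.silu(self.to_value(x))
        z = F.silu(self.to_qk(x))
        q = z * self.scale[0] + self.offset[0]   # cheap Q/K from shared Z
        k = z * self.scale[1] + self.offset[1]
        # squared-ReLU attention scores (normalization simplified)
        attn = F.relu(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5) ** 2 / x.shape[1]
        return self.to_out(u * (attn @ v))       # gate the attended values

# GatedAttentionUnit(256)(torch.randn(2, 64, 256))  ->  shape (2, 64, 256)
```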