82 results found
"Hung-yi Lee Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
Created
2019-07-02
584 commits to master branch, last one 2 days ago
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
Created
2020-12-07
158 commits to main branch, last one about a year ago
An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites
Created
2021-09-15
1,600 commits to main branch, last one 4 months ago
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Created
2018-02-01
21 commits to master branch, last one 3 years ago
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
Created
2018-03-02
36 commits to master branch, last one 3 years ago
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy...
Created
2020-12-26
110 commits to main branch, last one 3 years ago
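The three Graph Attention Networks results above all implement the attention mechanism from https://arxiv.org/abs/1710.10903. For orientation, a minimal single-head version of that layer could look like the sketch below. This is a generic PyTorch illustration written for this list, not code from any of the listed repositories; the class and variable names (SimpleGATLayer, adj, and so on) are my own assumptions.

```python
# Minimal single-head GAT-style attention layer (simplified sketch, not from the repos above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring vector

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        h = self.W(x)                                      # (N, out_dim)
        N = h.size(0)
        # Score every pair [h_i || h_j], then keep only actual edges
        h_i = h.unsqueeze(1).expand(N, N, -1)
        h_j = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1), 0.2)
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)                   # per-node attention coefficients
        return alpha @ h                                   # attention-weighted aggregation

# Toy example: 4 nodes, 3 input features, adjacency includes self-loops
x = torch.randn(4, 3)
adj = torch.eye(4) + torch.tensor([[0, 1, 0, 0],
                                   [1, 0, 1, 0],
                                   [0, 1, 0, 1],
                                   [0, 0, 1, 0]], dtype=torch.float)
print(SimpleGATLayer(3, 8)(x, adj).shape)  # torch.Size([4, 8])
```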
Datasets, tools, and benchmarks for representation learning of code.
This repository has been archived
Created
2019-02-28
286 commits to master branch, last one 2 years ago
The implementation of DeBERTa
Created
2020-06-08
55 commits to master branch, last one about a year ago
CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).
Created
2018-11-26
71 commits to master branch, last one 3 years ago
Recent Transformer-based CV and related works.
Created
2021-02-11
828 commits to main branch, last one about a year ago
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Created
2021-01-31
72 commits to main branch, last one 3 years ago
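Most of the vision-attention repositories in these results build on the same scaled dot-product self-attention primitive. The sketch below is a minimal, generic PyTorch version of that building block, written for this list rather than taken from the repository above; the SelfAttention class and its parameters are illustrative assumptions.

```python
# Minimal scaled dot-product self-attention over a sequence of tokens (generic sketch).
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # joint query/key/value projection
        self.proj = nn.Linear(dim, dim)      # output projection
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (batch, tokens, dim), e.g. flattened image patches
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)

tokens = torch.randn(2, 16, 64)              # 2 images, 16 patches, 64-dim each
print(SelfAttention(64)(tokens).shape)       # torch.Size([2, 16, 64])
```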
List of efficient attention modules
This repository has been archived
Created
2020-07-31
46 commits to master branch, last one 3 years ago
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Created
2018-10-23
101 commits to master branch, last one 6 years ago
Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
Created
2024-06-10
40 commits to main branch, last one about a month ago
Text classification using deep learning models in Pytorch
Created
2018-06-21
22 commits to master branch, last one 6 years ago
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
Created
2021-03-09
129 commits to transformer branch, last one 2 years ago
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
Created
2023-05-19
275 commits to main branch, last one 6 months ago
A PyTorch implementation of Speech Transformer, an end-to-end ASR model using the Transformer network, for Mandarin Chinese.
Created
2018-11-22
23 commits to master branch, last one about a year ago
Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (Pytorch and Tensorflow)
Created
2019-08-03
181 commits to master branch, last one 2 years ago
Official PyTorch implementation of Fully Attentional Networks
Created
2022-04-20
18 commits to master branch, last one 2 years ago
Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP
Created
2017-03-29
105 commits to master branch, last one 4 years ago
[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers
Created
2022-06-18
113 commits to main branch, last one 11 months ago
DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image
Created
2020-11-05
131 commits to master branch, last one 8 months ago
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast and state-of-the-art (SOTA) deep-learning neural network model for efficient time-seri...
Topics: impute, pytorch, attention, imputation, time-series, transformer, deep-learning, interpolation, missing-values, self-attention, incomplete-data, imputation-model, machine-learning, irregular-sampling, partially-observed, attention-mechanism, incomplete-time-series, time-series-imputation, partially-observed-data, partially-observed-time-series
Created
2021-12-07
84 commits to main branch, last one 3 months ago
[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch
Created
2022-06-29
32 commits to main branch, last one about a year ago
[NeurIPS 2021 Spotlight] & [IJCV 2024] SOFT: Softmax-free Transformer with Linear Complexity
Created
2021-09-11
72 commits to master branch, last one 8 months ago
Representation learning on dynamic graphs using self-attention networks
Created
2019-07-21
3 commits to master branch, last one 4 years ago
[MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models
Created
2021-12-13
210 commits to main branch, last one 7 days ago
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Created
2021-03-31
37 commits to main branch, last one 2 years ago
Awesome Transformers (self-attention) in Computer Vision
Created
2020-10-18
33 commits to main branch, last one 3 years ago