Stars: 342
Forks: 32
Language: Python
Last Updated: May 01, 2024
Similar Repos
| Language | Stars | Description | Updated At |
|---|---|---|---|
| Python | 16 | A PyTorch implementation of the Compact Multi-Head Self-Attention Mechanism from the paper: "Low Rank Factorization … | Aug 11, 2022 |
| Python | 2 | Visualize DETR Multi-Head Self-Attention weights | May 08, 2023 |
| Python | 11 | Neural News Recommendation with Multi-Head Self-Attention using BERT | Sep 12, 2022 |
| Python | 15 | Source code of the paper "Attention as Relation: Learning Supervised Multi-head Self-Attention for Relation Extraction, … | Dec 22, 2021 |
| Python | 6 | Multi-Head Attention, Transformer, Perceiver, Linear Attention. | Oct 19, 2023 |
| Python | 110 | Multi-head attention in PyTorch | Nov 30, 2022 |
| Python | 183 | Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient … | Jul 26, 2022 |
| Python | 4 | A PyTorch implementation of handwritten text recognition with a CNN and multi-head self-attention | Jan 11, 2022 |
| Python | 2 | ResNet for License Plate Detection & Multi-Head Self-Attention Transformer for OCR | Jun 12, 2023 |
| Python | 20 | The PyTorch implementation of ReCoSa (the Relevant Contexts with Self-attention) for dialogue generation using the multi-head … | Mar 14, 2022 |
| Python | 85 | Implementation of Nyström Self-attention, from the paper Nyströmformer | Aug 04, 2022 |
| C++ | 809 | Fast and memory-efficient exact attention | Aug 13, 2022 |
| C++ | 7 | Fast and memory-efficient exact attention | Jan 29, 2023 |
| None | 3 | Fast and memory-efficient exact attention | Jul 28, 2022 |
| None | 2 | Fast and memory-efficient exact attention | Apr 28, 2023 |
| None | 2 | Fast and memory-efficient exact attention | Apr 19, 2023 |
| C++ | 3 | Fast and memory-efficient exact attention | Sep 20, 2023 |
| C++ | 2 | Fast and memory-efficient exact attention | Jun 28, 2023 |
| C++ | 4 | Fast and memory-efficient exact attention | Jun 26, 2023 |
| Python | 3 | Code for the paper: What Dense Graph do You Need for Self-attention? | May 30, 2022 |
| TypeScript | 5 | A generalized multi-key memoization solution that does not leak memory. | Feb 15, 2023 |
| Python | 44 | A PyTorch implementation of Mugs proposed by our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework". | Jun 29, 2022 |
| Python | 6 | Code for ACL 2022 findings paper "Gaussian Multi-head Attention for Simultaneous Machine Translation" | Apr 14, 2023 |
| Python | 8 | Multi-head self-attention based spatial-temporal information graph convolutional networks for traffic flow forecasting | May 18, 2022 |
| Python | 2 | Implementation of FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Oct 07, 2023 |
| Python | 3 | PyTorch implementation of "Distinguishing Homophenes using Multi-Head Visual-Audio Memory" (AAAI 2022) | Jun 10, 2022 |
| Python | 135 | Code for Multi-Head Attention: Collaborate Instead of Concatenate | Apr 29, 2023 |
| Python | 2 | A PyTorch implementation of MCA based on PRICAI 2022 paper "Weakly-supervised Temporal Action Localization with … | Jul 29, 2023 |
| Python | 94 | TensorFlow implementation for paper Time Interval Aware Self-Attention for Sequential Recommendation. | Apr 23, 2023 |
| Jupyter Notebook | 19 | PyTorch implementation of the paper Stand-Alone Self-Attention in Vision Models | Mar 18, 2023 |
| HTML | 4 | Not a head pat we need, but a head pat we deserved. | Apr 16, 2023 |
| Python | 42 | Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a … | Jul 26, 2022 |
| Python | 290 | Code for the Eager Translation Model from the paper You May Not Need Attention | Mar 30, 2023 |
| Jupyter Notebook | 3 | Multi-Head Bi-Directional Attention Flow For Machine Reading Comprehension | Aug 07, 2022 |
| Python | 4 | Repository for the Multi-Head Multi-Loss Model Calibration paper | Feb 28, 2024 |
| Python | 16 | PyTorch implementation of Self-Attention ConvLSTM | May 08, 2023 |
| Python | 5 | TensorFlow implementation of Self-Attention GAN | Nov 15, 2020 |
| Python | 3 | A sliding-window based multi-head attention using Longformer CUDA kernels | Oct 22, 2022 |
| None | 5 | Nested Deformable Multi-head Attention for Facial Image Inpainting [WACV-23] | Apr 26, 2023 |
| None | 2 | Simple paper implementation code for the models that followed Attention Is All You Need (Transformer) | Dec 02, 2020 |
| Python | 31 | Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" | May 26, 2022 |
| Python | 153 | Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch | Apr 30, 2023 |
| Python | 2 | Implementation of the Attention Is All You Need paper (Transformer model) | Nov 19, 2021 |
| Python | 2 | PyTorch Implementation of Self-Attention GAN (SAGAN) | Oct 27, 2023 |
| Python | 1436 | A memory-efficient implementation of DenseNets | Oct 18, 2022 |
| Java | 2 | A UiAutomator on Android that does not need root access | Jul 09, 2022 |
| Java | 2 | A UiAutomator on Android that does not need root access | Sep 22, 2022 |
| Kotlin | 2 | Multi-platform efficient paper size calculator app | Dec 16, 2022 |
| Python | 14 | The implementation of the paper "Improving Sample Quality of Diffusion Models Using Self-Attention Guidance". | Oct 12, 2022 |
| None | 2 | Implementation of the paper "Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism" | Oct 06, 2022 |
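Nearly every repo above implements some variant of multi-head self-attention. As a reference point for comparing them, here is a minimal NumPy sketch of the standard mechanism (scaled dot-product attention across several heads); the function name, weight layout, and shapes are illustrative, not taken from any repo in the list:

```python
import numpy as np

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Minimal multi-head self-attention (no masking, bias, or dropout).

    x: (seq_len, d_model) input sequence.
    w_q, w_k, w_v, w_o: (d_model, d_model) projection matrices.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project the input and split the result into heads:
    # (seq_len, d_model) -> (num_heads, seq_len, d_head)
    def split(h):
        return h.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)

    # Scaled dot-product attention, computed independently per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (H, S, S)
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    heads = weights @ v                                   # (H, S, d_head)

    # Concatenate the heads and apply the output projection
    out = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o
```

The "memory-efficient" and FlashAttention entries in the table compute exactly this function; they differ only in how the (S, S) score matrix is tiled and recomputed to avoid materializing it in full.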