Sparse Attention

Fibottention: Inceptive Visual Representation Learning with Diverse Attention Across Heads

Visual perception tasks are predominantly solved by Vision Transformer (ViT) architectures, which, despite their effectiveness, encounter a computational bottleneck due to the quadratic complexity of computing self-attention. This inefficiency is …
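To make the quadratic-cost motivation concrete, the sketch below shows one generic way to sparsify self-attention with a different dilated local window per head, loosely inspired by the paper's title. It is an illustrative assumption only: the window size, the Fibonacci-based dilation rule, and all function names (fibonacci, diverse_head_masks, sparse_attention) are placeholders, not the authors' actual Fibottention construction.

# Illustrative sketch: per-head sparse attention masks with Fibonacci-based
# dilations. All names and the masking rule are assumptions for illustration,
# not the paper's exact method.
import torch
import torch.nn.functional as F

def fibonacci(n):
    """Return the first n Fibonacci numbers starting from 1, 2."""
    fibs = [1, 2]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

def diverse_head_masks(seq_len, num_heads, window=32):
    """Boolean masks of shape (num_heads, seq_len, seq_len): head h keeps
    query-key pairs whose offset is a multiple of the h-th Fibonacci number
    and lies inside a local window, so each head sees a different sparse
    pattern instead of the full quadratic grid."""
    dilations = fibonacci(num_heads)
    offsets = torch.arange(seq_len).unsqueeze(0) - torch.arange(seq_len).unsqueeze(1)
    masks = []
    for d in dilations:
        keep = (offsets.abs() <= window) & (offsets % d == 0)
        masks.append(keep)
    return torch.stack(masks)

def sparse_attention(q, k, v, masks):
    """Scaled dot-product attention with disallowed query-key pairs masked
    to -inf before the softmax. q, k, v: (batch, heads, seq_len, head_dim)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~masks.unsqueeze(0), float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Tiny usage example on random tensors.
B, H, N, D = 1, 4, 16, 8
q, k, v = (torch.randn(B, H, N, D) for _ in range(3))
out = sparse_attention(q, k, v, diverse_head_masks(N, H, window=8))
print(out.shape)  # torch.Size([1, 4, 16, 8])

Because every head attends only to a bounded set of offsets, the number of scored query-key pairs grows roughly linearly in sequence length rather than quadratically, which is the general efficiency argument behind sparse-attention designs of this kind.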