VIS Lab
Neural network architecture
Skip-Attention: Improving Vision Transformers by Paying Less Attention
This work aims to improve the efficiency of vision transformers (ViT). While ViTs use computationally expensive self-attention operations in every layer, we identify that these operations are highly correlated across layers – a key redundancy that …
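The key observation above — that self-attention maps change little from one layer to the next — suggests that later layers can reuse attention computed earlier instead of recomputing it. The sketch below is only an illustration of that idea under this assumption, not the paper's exact SkipAt formulation; the class and argument names (ReuseAttentionBlock, reuse_attn) are hypothetical.

import torch
import torch.nn as nn

class ReuseAttentionBlock(nn.Module):
    """Transformer block that can either compute self-attention
    or reuse an attention map passed in from an earlier layer."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.norm1 = nn.LayerNorm(dim)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, reuse_attn=None):
        # x: (batch, tokens, dim); reuse_attn: (batch, heads, tokens, tokens) or None
        b, n, d = x.shape
        h = self.norm1(x)
        v = self.v(h).reshape(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        if reuse_attn is None:
            # Standard self-attention: O(n^2) similarity computation per layer.
            q = self.q(h).reshape(b, n, self.num_heads, self.head_dim).transpose(1, 2)
            k = self.k(h).reshape(b, n, self.num_heads, self.head_dim).transpose(1, 2)
            attn = ((q @ k.transpose(-2, -1)) * self.head_dim ** -0.5).softmax(dim=-1)
        else:
            # Skip the Q/K projections and the quadratic similarity entirely,
            # reusing the (assumed highly correlated) attention from a prior layer.
            attn = reuse_attn
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        x = x + self.proj(out)
        x = x + self.mlp(self.norm2(x))
        return x, attn

A minimal usage pattern under the same assumption: one block computes attention, and subsequent blocks reuse it.

blocks = nn.ModuleList([ReuseAttentionBlock(192, 3) for _ in range(4)])
x = torch.randn(2, 197, 192)            # e.g. 196 patch tokens + CLS, batch of 2
x, attn = blocks[0](x)                  # compute attention once
for blk in blocks[1:]:
    x, attn = blk(x, reuse_attn=attn)   # later blocks skip the attention computation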