Published January 1, 2024 | Version v1
Conference paper | Open Access

HaLViT: Half of the Weights are Enough

  • 1. Istanbul Technical University, Department of Artificial Intelligence and Data Engineering, Signal Processing and Computational Intelligence Research Group (SP4CING), Institute of Informatics, Istanbul, Türkiye

Description

Deep learning architectures such as Transformers and Convolutional Neural Networks (CNNs) have led to ground-breaking advances across numerous fields. However, their large parameter counts make them difficult to deploy in resource-constrained environments. In our research, we propose a strategy that exploits both the column and row spaces of weight matrices, significantly reducing the number of required model parameters without substantially affecting performance. Applied to both bottleneck and attention layers, this technique achieves a notable reduction in parameters with minimal impact on model efficacy. Our proposed model, HaLViT, exemplifies a parameter-efficient Vision Transformer. Rigorous experiments on the ImageNet and COCO datasets validate the effectiveness of our method, with HaLViT offering results comparable to those of conventional models.
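The description outlines the core idea at a high level: reuse one weight matrix in two roles so that roughly half the parameters suffice. As a minimal illustrative sketch only (the paper's exact formulation is not reproduced here), the snippet below assumes the column/row-space reuse amounts to sharing a single matrix W between a pair of projections, applying W in one direction and its transpose in the other; the module name SharedWeightBottleneck and the chosen dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightBottleneck(nn.Module):
    """Illustrative sketch: two projections sharing one parameter matrix.

    The down-projection applies W (mapping inputs into its column space)
    and the up-projection applies W^T (mapping back via its row space),
    so one stored matrix serves where two would normally be kept,
    roughly halving the layer's parameter count.
    """

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        # Single shared matrix of shape (hidden, dim).
        self.W = nn.Parameter(torch.empty(hidden, dim))
        nn.init.xavier_uniform_(self.W)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(F.linear(x, self.W))   # (..., dim) -> (..., hidden)
        return F.linear(h, self.W.t())      # (..., hidden) -> (..., dim), reusing W

# Usage: token embeddings pass through with their shape preserved.
x = torch.randn(2, 16, 64)                  # (batch, tokens, dim)
block = SharedWeightBottleneck(dim=64, hidden=128)
print(block(x).shape)                       # torch.Size([2, 16, 64])
```

A conventional bottleneck stores two independent dim-by-hidden matrices; the shared variant stores one, which is the "half of the weights" intuition the title alludes to.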

Files (158 Bytes)

bib-91ff081a-ff5e-45b6-b6f0-03950a9e83e1.txt (158 Bytes, md5:01f89c6b230f5be46aa6b668dab3d598)