ELSA: Exploiting Layer-wise N:M Sparsity for Vision Transformer Acceleration

CoRR (2024)

Keywords
Vision Transformer, Neural Network, Deep Neural Network, ImageNet, Matrix Multiplication, Sparse Matrix, Accuracy Loss, Memory Usage, Negligible Loss, Sparsity Level, Throughput Improvement, Suitable Configuration, Transformer Block, Computational Cost, Convolutional Neural Network, Weight Matrix, Dense Network, Search Space, Linear Mode, Sparse Representation, Sparse Set, Sparse Weight, Reduction In Computational Cost, Linear Projection, Hardware Accelerators, Deep Neural Network Model, Multilayer Perceptron Layer, Compression Ratio, Evolutionary Search, Shared Weights
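The technique named in the title, N:M sparsity, constrains a weight matrix so that every group of M consecutive weights contains at most N nonzeros. A minimal pure-Python sketch of magnitude-based N:M pruning (the helper `prune_nm` is hypothetical and illustrative, not code from the paper):

```python
def prune_nm(weights, n=2, m=4):
    """Enforce N:M sparsity on a flat list of weights: in each group of
    m consecutive weights, keep the n largest-magnitude entries and
    zero out the rest."""
    pruned = []
    for i in range(0, len(weights), m):
        group = weights[i:i + m]
        # indices of the n largest-magnitude weights in this group
        keep = sorted(range(len(group)),
                      key=lambda j: abs(group[j]), reverse=True)[:n]
        pruned.extend(w if j in keep else 0.0
                      for j, w in enumerate(group))
    return pruned

# Example: 2:4 sparsity on one row of a weight matrix
row = [0.9, -0.1, 0.4, -0.7, 0.2, 0.8, -0.3, 0.05]
print(prune_nm(row))  # → [0.9, 0.0, 0.0, -0.7, 0.0, 0.8, -0.3, 0.0]
```

The "layer-wise" aspect of ELSA would correspond to choosing different (N, M) configurations per layer; the sketch above shows only the pruning rule for one fixed configuration.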