FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024)

Cited by 1 | Viewed 45
Keywords
Quantization (signal),Fats,Transforms,Training,Adaptation models,Standards,Frequency-domain analysis,Efficient neural network,model compression,quantization,representation learning