Understanding The Effectiveness of Lossy Compression in Machine Learning Training Sets
arxiv(2024)
Abstract
Machine Learning and Artificial Intelligence (ML/AI) techniques have become
increasingly prevalent in high performance computing (HPC). However, these
methods depend on vast volumes of floating point data for training and
validation which need methods to share the data on a wide area network (WAN) or
to transfer it from edge devices to data centers. Data compression can be a
solution to these problems, but an in-depth understanding of how lossy
compression affects model quality is needed. Prior work largely considers a
single application or compression method. We designed a systematic methodology
for evaluating data reduction techniques for ML/AI, and we use it to perform a
comprehensive evaluation with 17 data reduction methods on 7 ML/AI
applications, showing that modern lossy compression methods can achieve a
50-100x improvement in compression ratio for a 1% or less loss in quality. Our
evaluation provides critical insights that guide the future use and design of
lossy compressors for ML/AI.