Generalized Deduplication: Lossless Compression by Clustering Similar Data

2019 IEEE 8th International Conference on Cloud Networking (CloudNet)

Cited by 4 | Views 22
Abstract
This paper proposes generalized deduplication, a concept in which similar data is systematically deduplicated by first transforming the chunks of each file into two parts: a basis and a deviation. This increases the potential for compression, as more chunks can share a common basis that the system can deduplicate. The deviation is kept small and stored together with an identifier of its chunk, e.g., the hash of the chunk, so that the original data can be recovered without errors or distortions. The paper characterizes the performance of generalized deduplication using Golomb-Rice codes as a suitable data transform function to discover similarities across all files stored in the system. Considering different synthetic data distributions, we show in theory and in simulations that generalized deduplication can achieve compression factors of 300 (i.e., 300 times less storage space), and that this compression is reached with 60,000 times fewer data chunks inserted into the system than classic deduplication requires (compression gains start earlier). Finally, we show that the table/registry used to recognize similar chunks is 10,000 times smaller for generalized deduplication than for classic deduplication techniques, resulting in lower RAM usage in the storage system.
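
To make the basis/deviation idea concrete, the minimal Python sketch below interprets each chunk as a fixed-width integer and splits it in the spirit of the Golomb-Rice transform mentioned in the abstract: the k low-order bits become the deviation and the remaining high-order bits the basis, so chunks that differ only in their low-order bits share one basis. The GDStore class, the SHA-256 basis identifier, and the choice k = 4 are illustrative assumptions for this sketch, not the paper's implementation, and the exact bit mapping used in the paper may differ.

    import hashlib

    # Rice parameter k: here the deviation is the k least-significant bits of
    # a chunk and the basis is everything above them, so similar chunks
    # (differing only in their low-order bits) map to the same basis.

    def gr_transform(chunk: int, k: int):
        basis = chunk >> k                    # shared part, deduplicated
        deviation = chunk & ((1 << k) - 1)    # small per-chunk remainder
        return basis, deviation

    def gr_inverse(basis: int, deviation: int, k: int) -> int:
        # Lossless reconstruction of the original chunk.
        return (basis << k) | deviation

    class GDStore:
        """Toy store: unique bases go into a registry, each stored chunk keeps
        only a basis identifier and its small deviation."""
        def __init__(self, k: int):
            self.k = k
            self.bases = {}    # basis id -> basis value (the dedup registry)
            self.chunks = []   # per-chunk (basis id, deviation)

        def put(self, chunk: int) -> None:
            basis, dev = gr_transform(chunk, self.k)
            bid = hashlib.sha256(basis.to_bytes(16, "big")).hexdigest()[:16]
            self.bases.setdefault(bid, basis)  # deduplicate the basis
            self.chunks.append((bid, dev))

        def get(self, i: int) -> int:
            bid, dev = self.chunks[i]
            return gr_inverse(self.bases[bid], dev, self.k)

    # Four chunks that differ only in their low 4 bits share a single basis.
    store = GDStore(k=4)
    for c in (1000, 1001, 1002, 1003):
        store.put(c)
    assert [store.get(i) for i in range(4)] == [1000, 1001, 1002, 1003]
    print(len(store.bases), "basis stored for", len(store.chunks), "chunks")

In this toy example the registry holds a single basis for four chunks, illustrating why the basis table can stay far smaller than the chunk table of classic deduplication.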
Keywords
generalized deduplication, Golomb-Rice codes, geometric distribution, data deduplication