Effect of Expected Chunk Size on Deduplication Ratio in Content-Defined Chunking Algorithms (2016)
Abstract:
Content-defined chunking (CDC) based deduplication lacks a mathematical model that quantitatively relates the expected chunk size to the deduplication ratio, which makes it difficult to optimize the deduplication ratio by tuning the expected chunk size. To address this problem, a mathematical model based on the logistic function is proposed. Based on observations of a large number of real data sets, the logistic function is used to describe the S-shaped variation trend of the unique (non-duplicate) data, solving the problem that this quantity is hard to derive and model theoretically. The CDC process is shown to follow a binomial distribution, from which a metadata-size model is derived theoretically. Combining these two data models, the deduplication ratio model is obtained by mathematical derivation and is validated experimentally on three real data sets. The experimental results show that the R² values, which reflect the goodness of fit of the model, are greater than 0.9 for most results, indicating that the model accurately captures the mathematical relationship between the expected chunk size and the deduplication ratio. The model provides a theoretical basis for further research on optimizing the deduplication ratio by adjusting the expected chunk size.
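As a rough illustration of the kind of model the abstract describes, the sketch below assumes the usual CDC setting: a chunk boundary is declared whenever a rolling fingerprint over the input satisfies a fixed condition, which holds at each position with probability roughly 1/E, where E is the expected chunk size. The symbols n, m, L, k, E_0 and the exact functional forms are illustrative assumptions, not the paper's notation.

\[
N_{\mathrm{chunks}} \sim \mathrm{Binomial}\!\left(n, \tfrac{1}{E}\right),
\qquad
\mathbb{E}[N_{\mathrm{chunks}}] = \frac{n}{E},
\qquad
M(E) \approx m \cdot \frac{n}{E}
\]
\[
U(E) \approx \frac{L}{1 + e^{-k\,(E - E_{0})}},
\qquad
r(E) = 1 - \frac{U(E) + M(E)}{n}
\]

Here n is the total input size, M(E) the metadata overhead for per-chunk fingerprints and index entries, U(E) the amount of unique data modeled by a logistic function, and r(E) one common definition of the deduplication ratio (the fraction of input that need not be stored); the paper may define the ratio differently, for example as original size over stored size.

The boundary-probability assumption behind the binomial model can be checked with a toy simulation. The snippet below is a minimal, hypothetical CDC sketch (Python's built-in hash over a sliding window stands in for a Rabin fingerprint, and no minimum or maximum chunk size is enforced); it only illustrates that the mean chunk size tracks E and that the chunk count is close to n/E.

```python
import os
import statistics

def cdc_chunk_sizes(data: bytes, expected_size: int, window: int = 48):
    """Toy content-defined chunking: a boundary is declared whenever a
    fingerprint of the last `window` bytes is divisible by `expected_size`,
    so each position is a boundary with probability roughly 1/expected_size.
    hash() stands in for a real rolling (Rabin) fingerprint."""
    sizes, start = [], 0
    for i in range(window, len(data)):
        if hash(data[i - window:i]) % expected_size == 0:  # boundary test
            sizes.append(i - start)
            start = i
    if start < len(data):
        sizes.append(len(data) - start)                    # trailing chunk
    return sizes

if __name__ == "__main__":
    expected = 4096                        # expected chunk size E
    data = os.urandom(2 * 1024 * 1024)     # 2 MiB of random input
    sizes = cdc_chunk_sizes(data, expected)
    # The number of boundaries in n positions is ~ Binomial(n, 1/E), so the
    # chunk count should be close to n/E and the mean chunk size close to E.
    print("chunks:", len(sizes), "  n/E:", len(data) // expected)
    print("mean chunk size:", round(statistics.mean(sizes)), "  E:", expected)
```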