File name: Why Perform Data Deduplication? - BE2010 Official Training PPT
File size: 9.06 MB
File format: PPT
Updated: 2024-05-12 07:17:00
SYMANTEC,BE2010
Why perform data deduplication? What are the advantages?
- Improves network bandwidth efficiency
- Frees up valuable storage space
- Makes data restores easier
- Frees administrators' time for other, more important tasks

What is the difference between file-level and block-level deduplication?

Speaker notes: Use this slide if your audience needs clarification on deduplication and the type of deduplication Backup Exec delivers in 2010. But before we go any further: how many of you are familiar with deduplication and the various "types" of deduplication?

What is deduplication and how does it work? Data deduplication (often called "intelligent compression" or "single-instance storage") is a method of reducing storage needs by eliminating redundant data. Only one unique instance of the data is actually retained on storage media, such as disk or tape. Redundant data is replaced with a pointer to the unique copy.

File vs. block-level deduplication: Data deduplication can generally operate at the file, block, or even bit level. File-level deduplication eliminates duplicate files. Block- and bit-level deduplication look within a file and store only the unique iterations of each block or bit. Each chunk of data is processed using a hash algorithm. Backup Exec 2010 delivers block-level deduplication, thus providing a higher level of compression and storage savings than file-level deduplication.

Source: SearchStorage definition: http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci1248105,00.html#
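The mechanism described above (hash each chunk, store one unique copy, replace duplicates with pointers) can be sketched in a few lines of Python. This is a simplified illustration using fixed-size chunks and SHA-256, not Backup Exec's actual implementation; the function names and chunk size are illustrative choices.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks; keep one copy of each unique
    chunk and represent the stream as a list of hash 'pointers'."""
    store = {}      # chunk hash -> unique chunk bytes (single instance)
    pointers = []   # ordered hashes standing in for the original chunks
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # store only the first occurrence
        pointers.append(digest)           # duplicates become pointers
    return store, pointers

def restore(store, pointers) -> bytes:
    """Rebuild the original data by following the pointers."""
    return b"".join(store[h] for h in pointers)

# 8 KiB of repeated content: 2 chunks referenced, but only 1 stored.
data = b"A" * 8192
store, pointers = deduplicate(data)
print(len(pointers), len(store))   # prints: 2 1
assert restore(store, pointers) == data
```

This also shows why block-level deduplication compresses better than file-level: two files that differ in a single block still share every other block in the store, whereas file-level deduplication would store both files in full.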