
Data Compression Category

Published: 2019-10-26 10:41:46


Data compression is an approach to shrinking a dataset to save storage space. According to a report in The Economist, the amount of digital data in the world is growing explosively, increasing from 1.2 zettabytes in 2010 to 1.8 zettabytes in 2011. How to compress data and manage storage cost-effectively is therefore a challenging and important task.

Traditionally, we use compression algorithms to achieve data reduction. The main idea of data compression is to use the fewest possible bits to represent information as accurately as possible. Because the goal is to represent the original information accurately rather than exactly, an encoder may be allowed to discard information that is useless for the task at hand. Classical compression approaches fall into two categories, lossless and lossy; the difference between them is whether unnecessary information is discarded.

Lossless compression reduces data by identifying and eliminating statistical redundancy in a reversible fashion. To remove redundant information, it can exploit statistical properties to build a new encoding, as Huffman coding does, or it can use a dictionary model, replacing repeated strings via a sliding-window algorithm. What matters is that when we restore losslessly compressed data, we recover the original without losing any information.
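As a concrete illustration of the statistical approach, here is a minimal Huffman-coding sketch (function names are my own; a real codec would also pack the bitstring into bytes and store the code table):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a {symbol: bitstring} prefix code built from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[w, [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least-frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]          # left branch prepends a 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]          # right branch prepends a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

def huffman_decode(bits, codes):
    """Reverse the prefix code: walk the bitstring, emitting a symbol per match."""
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

Because frequent symbols get shorter codes, `"aaaabbc"` encodes in 10 bits instead of 56, and decoding reproduces the input exactly, which is the defining property of a lossless scheme.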

Lossy compression reduces data by identifying unnecessary information and irretrievably removing it. The "unnecessary" information still carries information of its own; it just may not be useful in a particular domain, and discarding it is what makes the method lossy. In fields where we only need the useful information and can ignore the rest, lossy compression works well, which is why it dominates image, audio, and video coding. Consequently, we cannot recover the original data after applying a lossy algorithm.
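A toy example of the irreversibility, assuming 8-bit audio samples and a hypothetical `quantize` helper: keeping only the top few bits of each sample is a (very crude) lossy step, since distinct inputs collapse to the same output.

```python
def quantize(samples, bits=4):
    """Lossy quantization: keep only the top `bits` of each 8-bit sample."""
    step = 1 << (8 - bits)                   # size of each quantization bucket
    return [(s // step) * step for s in samples]
```

Once `7` and `8` both quantize to `0`, no decoder can tell them apart again, which is exactly why lossy output cannot be restored to the original.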

For a lossless approach, eliminating statistical redundancy becomes expensive as data grow, because the algorithm needs statistics over the whole input, counting every symbol. For a large dataset, it must therefore trade off speed against compression ratio.

Beyond classical compression algorithms, there are two further methods to reduce data: delta compression and data deduplication.

Delta compression takes a different perspective: it compresses two very similar files. It compares files A and B and computes the delta A−B, so that file B can be expressed as file A plus the delta A−B, which saves space. Delta compression is widely used in source-code version control and file synchronization.
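The idea can be sketched with Python's standard `difflib`: the delta stores only copy ranges into A plus the literal bytes of B that A lacks (the op format here is my own simplification):

```python
import difflib

def make_delta(a, b):
    """Delta from a to b: ('copy', i, j) reuses a[i:j]; ('insert', s) adds new text."""
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))     # region shared with file A
        elif tag in ("replace", "insert"):
            ops.append(("insert", b[j1:j2])) # material only in file B
        # 'delete' regions of A are simply not referenced
    return ops

def apply_delta(a, ops):
    """Reconstruct file B from file A and the delta."""
    out = []
    for op in ops:
        if op[0] == "copy":
            out.append(a[op[1]:op[2]])
        else:
            out.append(op[1])
    return "".join(out)
```

If A and B are mostly identical, the delta is dominated by small copy tuples, so storing A plus the delta is much cheaper than storing both files.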

Data deduplication targets large-scale storage systems and works at a coarse granularity: file level, or chunk level with chunks of roughly 8 KB. Chunk level is preferred over file level because it achieves better compression performance. In general, data deduplication splits the backup data into chunks and identifies each chunk by a cryptographically secure hash (such as SHA-1) of its content. When chunks are identical, the duplicates are removed and only one copy is stored, achieving the goal of saving space. The system keeps only the unique chunks plus the file metadata needed to reconstruct the original file.
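A minimal sketch of that pipeline, using fixed-size chunks for simplicity (production systems typically use content-defined chunking, and the `store`/`recipe` names here are my own):

```python
import hashlib

def dedup_store(data, store, chunk_size=8):
    """Split data into fixed-size chunks, keep one copy per SHA-1, return the recipe."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha1(chunk).hexdigest()
        store.setdefault(digest, chunk)      # duplicate chunks stored only once
        recipe.append(digest)                # per-file metadata: ordered chunk refs
    return recipe

def rebuild(recipe, store):
    """Reconstruct the original data from the recipe and the chunk store."""
    return b"".join(store[h] for h in recipe)
```

For highly repetitive backup data, the chunk store stays small while the recipe (a list of hashes) is all that is needed to restore every file byte-for-byte.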


Original article: https://www.cnblogs.com/wAther/p/11741973.html
