Lossy compression algorithm
Compression algorithms fall into two basic categories: lossy and lossless.

Lossy compression: mainly quantization algorithms, such as A-law and μ-law companding and Lloyd-Max optimal quantization.
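The companding quantizers mentioned above are easy to illustrate with μ-law. Below is a minimal sketch, assuming a signal normalized to [-1, 1]; the function names and the choice μ = 255 (the value used in North American telephony) are mine for illustration:

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    # mu-law companding: compress the dynamic range of a signal in [-1, 1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # Inverse mapping: recover the signal from its companded form
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1.0, 1.0, 101)                  # test signal
y = mu_law_compress(x)                           # compand
q = np.round((y + 1) / 2 * 255) / 255 * 2 - 1    # uniform 8-bit quantization
x_hat = mu_law_expand(q)                         # expand back
```

Because the quantizer acts on the companded signal, small amplitudes get a finer effective step size than a plain uniform quantizer would give them, which is exactly the point of companding.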

Lossless compression: mainly coding algorithms, such as subband coding, differential coding, and Huffman coding.
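Huffman coding, for instance, assigns shorter codewords to more frequent symbols. A minimal sketch (the function names are mine; a real codec would also pack the bits and transmit the code table):

```python
import heapq
from collections import Counter

def huffman_code(data):
    # Build a prefix-free code table: frequent symbols get short codewords
    heap = [[w, [sym, ""]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

def huffman_encode(data, code):
    return "".join(code[s] for s in data)

def huffman_decode(bits, code):
    inv = {c: s for s, c in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:                # prefix-free: first match is correct
            out.append(inv[buf])
            buf = ""
    return "".join(out)

text = "this sentence is the data to be compressed"
code = huffman_code(text)
bits = huffman_encode(text, code)
```

The decoder needs no separators between codewords precisely because the code is prefix-free.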

In addition, although a time-frequency transform by itself has no compression effect, it is a valuable compression tool; examples include the FFT and the DCT.
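To see why transforms such as the DCT help, note that they compact the energy of a smooth signal into a few coefficients, which can then be quantized coarsely or discarded. A sketch computing the DCT-II directly from its definition (O(N²), illustration only; the function name is mine):

```python
import numpy as np

def dct2_ortho(x):
    # Orthonormal DCT-II computed from its definition (O(N^2) demo)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    X = np.cos(np.pi / N * (n + 0.5) * k) @ x
    X[0] *= np.sqrt(1.0 / N)     # orthonormal scaling, DC term
    X[1:] *= np.sqrt(2.0 / N)    # orthonormal scaling, AC terms
    return X

x = np.linspace(0.0, 1.0, 64)    # a smooth ramp signal
X = dct2_ortho(x)
energy = X ** 2
frac = energy[:8].sum() / energy.sum()   # energy in first 8 of 64 coefficients
```

For this smooth input, well over 99% of the energy lands in the first eight coefficients, so the remaining 56 can be dropped with little squared error; that trade-off is what makes transforms useful inside lossy codecs.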

Finally, there are sparse-reconstruction methods such as compressed sensing.

Since losing information implies a compromise between error and bit rate, we first consider a distortion measure, for example, squared error. This paper introduces different quantizers, each with its own distortion behavior. The mathematical foundation for the development of many lossy data-compression algorithms is the study of stochastic processes.

Introduction:

When the histogram of an image is relatively flat, the compression ratio achievable with lossless techniques (for example, Huffman coding, arithmetic coding, or LZW) is low. For image compression in multimedia applications that require a higher compression ratio, lossy methods are usually used. In lossy compression, the reconstructed image differs from the original but is perceptually similar. To describe quantitatively how close the approximation is to the original data, some form of distortion measure is needed.
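The claim about flat histograms can be checked with Shannon entropy, which lower-bounds the average code length of any lossless code: a flat histogram over 256 gray levels carries 8 bits/pixel of irreducible information, while a peaked histogram can be coded with far fewer bits. A sketch (the synthetic histograms are my own illustration):

```python
import numpy as np

def entropy_bits(hist):
    # Shannon entropy of a histogram, in bits per symbol:
    # a lower bound on the mean codeword length of any lossless code
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

flat = np.ones(256)                                   # perfectly flat histogram
peaked = np.exp(-np.abs(np.arange(256) - 128) / 8.0)  # concentrated near level 128

h_flat = entropy_bits(flat)      # 8 bits/pixel: almost no lossless gain possible
h_peaked = entropy_bits(peaked)  # markedly fewer bits/pixel
```

An 8-bit image with the flat histogram cannot be losslessly compressed below 8 bits/pixel on average, which is why such images push applications toward lossy methods.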

Distortion measurement:

A distortion measure is a mathematical quantity that specifies, by some distortion criterion, how close an approximation is to the original value. When examining compressed data, it is natural to think of distortion in terms of the numerical difference between the original data and the reconstructed data. However, when the data being compressed is an image, such a metric may not produce the expected result.

For example, if the reconstructed image is identical to the original but shifted right by one vertical scan line, an ordinary human observer would find it hard to distinguish from the original, and would conclude that the distortion is very small. However, when the comparison is performed numerically, we find a large distortion, because individual pixel values of the reconstructed image differ greatly from the original. The problem is that we need a measure of perceptual distortion, not a naive numerical one. The study of perceptual distortion, however, is beyond the scope of this book.
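The shifted-image example is easy to reproduce numerically: a one-column shift leaves the picture perceptually unchanged yet produces a large pixel-wise error. A sketch on a synthetic image (the image itself is my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "image": a horizontal gradient plus fine random texture
gradient = np.tile(np.linspace(0, 255, 256), (64, 1))
img = gradient + rng.normal(0.0, 20.0, size=(64, 256))

shifted = np.roll(img, 1, axis=1)     # shift right by one column
mse = np.mean((img - shifted) ** 2)   # large, despite perceptual similarity
```

The fine texture decorrelates adjacent columns, so the pixel-wise error is enormous even though a viewer would see the same picture, illustrating exactly the gap between numerical and perceptual distortion described above.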

Among the many numerical distortion metrics that have been defined, we present the three most commonly used in image compression. The mean squared error (MSE) is often used when we are interested in the average pixel-level difference. It is defined as