Basic introduction of compressed sensing theory
Name: Wang Xinlei.

Student ID: 2101110262.

College: School of Communication Engineering

Introduction: Compressed sensing is one of the most striking achievements in the field of signal processing since the beginning of the 21st century, and it has been applied effectively in magnetic resonance imaging, image processing, and other fields. Behind its complex mathematical expressions, compressed sensing theory contains a very subtle idea. Starting from an imaginative premise and backed by rigorous mathematical proof, compressed sensing achieves a remarkable result: it breaks through the golden rule of signal processing, the Nyquist sampling theorem. That is, during signal sampling, it achieves the same effect as full sampling with far fewer sampling points.

Keywords: compressed sensing; undersampling; sparse recovery

Question: What is the main breakthrough of compressed sensing relative to the Nyquist sampling theorem?

Main text

1. A preliminary understanding of CS

Compressed sensing is a signal sampling technique in which the data are compressed during the sampling process itself. We know that when an analog signal is sampled at a fixed rate to obtain a digital signal, the sampling frequency must exceed twice the highest frequency in the signal in order to retain all of the information in the original signal (the Nyquist sampling theorem). Candès et al. showed, however, that if the signal is sparse in the frequency domain, it can be reconstructed from far fewer samples than the sampling theorem requires. Sampling in the Nyquist setting is equispaced, and a low sampling frequency inevitably causes aliasing. But what about non-uniform sampling? What about random sampling? Random sampling inevitably causes spectral leakage, but the leakage is spread evenly across the whole frequency domain and its magnitude is small. The dominant peaks can still be detected by setting a threshold, so it is possible to recover the original signal.

Figure 1 shows an original analog signal that is sparse in the frequency domain, consisting of only three frequency components. To obtain a digital signal, it is first sampled in the time domain. Following compressed sensing theory, the signal can be randomly subsampled in time, and the resulting spectrum then leaks as shown in the figure. The true frequency components of the original signal can nevertheless be found by threshold detection, so the original signal can be recovered.
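The idea in Figure 1 can be sketched in a few lines of NumPy. This is an illustrative demo, not the paper's procedure: the signal length, the three frequencies, and the number of retained samples are all assumptions chosen for the sketch.

```python
import numpy as np

# Illustrative sketch of the Figure 1 idea (all parameters are assumptions):
# a signal that is sparse in frequency is randomly undersampled in time, and
# the true components are recovered by picking the largest spectral peaks.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
# Three frequency components, as in Figure 1
x = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 120 * t) \
    + 0.6 * np.sin(2 * np.pi * 280 * t)

# Random subsampling: keep m of the n time-domain samples, zero-fill the rest
m = 256
keep = rng.choice(n, size=m, replace=False)
y = np.zeros(n)
y[keep] = x[keep]

# The spectrum of the undersampled signal leaks energy everywhere, but the
# leakage is small and spread out; the true components remain the top peaks.
spectrum = np.abs(np.fft.rfft(y))
peaks = np.sort(np.argsort(spectrum)[-3:])
print(peaks)    # the three true frequency bins: 50, 120, 280
```

With equispaced subsampling at the same rate, the components would instead alias onto false frequencies; the random pattern is what turns the error into small, noise-like leakage that a threshold can reject.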

2. Mathematical model of CS

CS has two prerequisites: sparsity of the signal and incoherence of the measurements.

Suppose x is the original signal, of length n and sparsity k, unknown; φ is the measurement matrix, representing the sampling (i.e., compression) process, such as random sampling, which is known; and the sampled result y = φx is also known. The compressed sensing problem is therefore to recover the original signal x given the measurements y and the measurement matrix φ. In general, the signal x itself is not sparse, so it needs a sparse representation on some sparse basis: x = ψs, where s is a sparse vector, the sparse signal we seek, and ψ is a sparse basis matrix, also called the sparsifying transform matrix, such as the Fourier basis.

So the problem is finally expressed as:

y = φψs = θs    (1)

Here θ = φψ is called the sensing matrix. The sensing matrix must satisfy the restricted isometry property (RIP); equivalently, the measurement matrix φ and the sparse basis ψ must be incoherent, i.e., the sampling process must be uncorrelated with the sparsifying transform. Candès et al. also found that a measurement matrix with independent and identically distributed Gaussian entries serves as a universal compressed sensing measurement matrix, so the Gaussian random matrix has become the most commonly used measurement matrix in CS.

3. Common recovery methods of CS

As is well known, equation (1) is underdetermined and has infinitely many solutions, so a constraint must be added to obtain a unique one. Since s is sparse, we look for the sparsest solution among all solutions of the equation, i.e., the one with the fewest nonzero entries.

There are three general families of methods for solving the above problem: convex optimization algorithms, greedy algorithms, and Bayesian methods. Common CS algorithms include:

Basis pursuit (BP): a convex optimization method, which relaxes the sparsity objective to an ℓ1-norm minimization.

Orthogonal matching pursuit (OMP): a greedy algorithm.

Iterative thresholding algorithms: including the iterative soft-thresholding algorithm (ISTA) and iterative hard thresholding (IHT). FISTA (fast ISTA) is an accelerated variant of ISTA.
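Of the algorithms listed, OMP is the simplest to write down. The following is a sketch under assumed sizes (n, m, k and the test signal are illustrative): at each step it selects the column of θ most correlated with the residual, then re-fits all selected coefficients by least squares.

```python
import numpy as np

# Sketch of orthogonal matching pursuit (OMP); sizes are illustrative.
def omp(Theta, y, k):
    n = Theta.shape[1]
    support = []
    residual = y.copy()
    s_hat = np.zeros(n)
    for _ in range(k):
        # 1) matching step: column most aligned with the current residual
        j = int(np.argmax(np.abs(Theta.T @ residual)))
        support.append(j)
        # 2) orthogonal step: least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        # 3) update the residual
        residual = y - Theta[:, support] @ coef
    s_hat[support] = coef
    return s_hat

# Noiseless demo with a Gaussian measurement matrix
rng = np.random.default_rng(2)
n, m, k = 128, 40, 4
Theta = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
s_true[idx] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
y = Theta @ s_true
s_hat = omp(Theta, y, k)
print(np.allclose(s_hat, s_true))    # True: exact recovery in the noiseless case
```

The least-squares re-fit over the whole support (rather than keeping each coefficient fixed once chosen) is what distinguishes OMP from plain matching pursuit.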

References

[1]. Candès, E.J., and T. Tao. "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory 52.12 (2006): 5406-5425.

[2]. Donoho, D.L. "Compressed sensing." IEEE Transactions on Information Theory 52.4 (2006): 1289-1306.