Remote sensing spectral information extraction
After obtaining new remote sensing spectral data, it is often necessary to improve existing image processing techniques or develop new ones in order to make better use of the spectral data and mine new information. Remote sensing image processing is generally divided into four parts: image restoration, image enhancement, image synthesis and image classification.

Image restoration refers to the correction and compensation of radiometric distortion, geometric distortion, various kinds of noise and the loss of high-frequency information introduced during imaging. It belongs to the category of preprocessing, generally including radiometric correction, geometric correction, tangent correction, destriping, digital magnification and mosaicking, and it is the first step of remote sensing image processing.

Image enhancement, also known as image information extraction, amplifies the gray-level differences between objects in the image through mathematical transformations in order to highlight the main target information or improve the visual effect of the image, thereby improving the interpreter's ability to discriminate, or even identifying objects directly. It is the most important aspect of image processing in remote sensing applications, because it requires both an understanding of how the image is formed and the ability to extract target information by studying the spectral and spatial characteristics of the target. Many methods exist, including contrast enhancement, color enhancement, arithmetic enhancement, transformation enhancement and so on.

Image synthesis, also known as multi-source information synthesis, is the most effective form of remote sensing application, and the necessary route for comprehensively applying remote sensing, geographic information systems and related disciplines to solve practical problems in the future. "Multi-source" here refers to a variety of remote sensing and non-remote sensing data sources.
Multi-source information synthesis refers to the spatial registration and overlay of digital images and other types of data of the same area from different sources according to a unified geographic coordinate system, so that the different data sources can be compared or comprehensively analyzed to reveal the nature of ground objects or phenomena and thereby solve practical problems. There are roughly two ways to realize it: one is to process the data from each source separately and then overlay and compare the results; the other is to treat each channel of the remote sensing data and the other data sources as variables during processing and analyze them together at the end.

Image classification refers to the computer-based division and identification of different spectral cluster types in multi-band remote sensing data, according to the distribution characteristics of the pixels in multi-dimensional spectral space and certain statistical decision criteria, so as to realize automatic classification and identification of targets. Depending on whether training pixels of known classes must be given before classification, it is divided into supervised and unsupervised classification. Unsupervised classification is computationally simple and easy to implement, but its accuracy is poor; supervised classification is computationally more complex but more accurate, and is generally suitable when training pixels of known classes are available, high accuracy is required, and the class attribute of each pixel must be given. Because application fields and target objects differ, classification methods tailored to specific objects can be developed.

Professor Zhu summarized the process of remote sensing digital image processing and the relationships among its parts in Figure 4-1, which is concise and easy to understand. The following focuses on two important links: remote sensing image information extraction and classification.

In order to obtain the target information, it is often necessary to suppress or eliminate interfering information in remote sensing images and highlight the useful information; this calls for methods of image information enhancement and extraction. Generally speaking, these methods fall into three categories: those based on differences in spectral reflectance intensity, those based on differences in the pattern of spectral variation, and others.

Figure 4-1 Basic Process of Remote Sensing Image Processing

1. Information extraction methods based on differences in spectral reflectance intensity

(1) Contrast enhancement

Contrast enhancement, also known as contrast expansion or stretch enhancement, is a processing technique that expands or stretches the distribution of an image's gray values (the spectral reflectances of its pixels) to occupy the whole dynamic range (0–255), thereby enlarging the gray differences between objects and resolving as many gray levels as possible. The gray-value distribution of a remote sensing image can generally be expressed by the frequency histogram of the pixels at each gray level; its shape basically represents the image's ability to resolve ground objects in that band and the dynamic range of the gray values. Contrast enhancement changes this histogram, expanding the dynamic range of the gray values and thereby enhancing the information. Its processing object is a single-band image. Expressed simply as a functional relationship, contrast enhancement is:

y = f(x)    (4-1)

where y is the gray value of a pixel in the enhanced image, x is the gray value of the pixel in the original input image, and the function f represents the enhancement mode; different choices of f give different types of enhancement (as shown in Figure 4-2). There are two processing approaches: one is to transform each pixel of the image through a function transformation, which is often used when the object (ground feature) to be enhanced has been determined; the other is to change the gray-structure relationships among the pixels of the image through histogram adjustment, such as the common histogram equalization.

Figure 4-2 Several Different Contrast Enhancement Methods
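The two approaches above can be sketched in a few lines of NumPy; the band values below are synthetic stand-ins, and `linear_stretch`/`hist_equalize` are illustrative names rather than functions from any particular package.

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Linearly stretch gray values y = f(x) to span the full dynamic range."""
    x_min, x_max = float(img.min()), float(img.max())
    y = (img.astype(float) - x_min) / (x_max - x_min) * (out_max - out_min) + out_min
    return y.astype(np.uint8)

def hist_equalize(img, levels=256):
    """Histogram equalization: remap gray levels through the cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size                       # cumulative distribution
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # look-up table y = f(x)
    return lut[img]

# A synthetic low-contrast 8-bit band occupying only gray levels 100..150
band = (100 + np.arange(64 * 64) % 51).astype(np.uint8).reshape(64, 64)
stretched = linear_stretch(band)   # now spans 0..255
equalized = hist_equalize(band)
```

The first function transforms each pixel with a fixed f; the second changes the gray structure of the whole image through its histogram.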

(2) Color enhancement

The human eye distinguishes colors far better than it distinguishes pure black-and-white gray levels, so highlighting ground objects by color enhancement has obvious advantages. Color enhancement generally falls into two types: single-band pseudo-color enhancement and multi-band false-color compositing. The common methods of single-band pseudo-color enhancement are: ① color density slicing; ② gray-to-color conversion. The basic method of color density slicing is: according to the gray values (pixel spectral reflectances) of the target objects to be represented, the single-band image is sliced into gray-level intervals, and each interval is then filled with a different color, so that the single-band image is transformed into a pseudo-color image. This method is also commonly applied to the result image after classification to make the classes easy to distinguish. When using it, care must be taken to assign clearly different colors to spatially adjacent feature types.
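Color density slicing can be sketched as follows; the breakpoints, colors and function name are illustrative, assuming an 8-bit band.

```python
import numpy as np

def density_slice(band, bounds, colors):
    """Slice a single-band image into gray-level intervals and fill each
    interval with one color. bounds: ascending gray-level breakpoints;
    colors: one RGB triple per slice (len(bounds) + 1 of them)."""
    idx = np.digitize(band, bounds)            # slice index of every pixel
    palette = np.asarray(colors, dtype=np.uint8)
    return palette[idx]                        # (H, W, 3) pseudo-color image

band = np.array([[10, 80], [130, 250]], dtype=np.uint8)
# Three breakpoints -> four slices, each assigned a distinct color
rgb = density_slice(band,
                    bounds=[64, 128, 192],
                    colors=[(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)])
```

Here the darkest slice is shown blue and the brightest red; in practice the palette is chosen so that neighboring classes contrast clearly.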

Gray-to-color conversion is another commonly used pseudo-color enhancement method. Compared with color density slicing, it achieves image enhancement more easily over a wider color range. A set of typical gray-to-color transfer functions is shown in Figure 4-3. Let L be the maximum gray level of the band: graph (a) shows the transfer function of the red channel, meaning that all gray levels below L/2 are mapped to the darkest possible red, the gray levels in the range (L/2, 3L/4) ramp linearly from dark red to bright red, and the gray levels in the range (3L/4, L) are mapped to full red. Similarly, graphs (b) and (c) show the transfer functions of the green and blue channels, and graph (d) shows the combination of the three. It is not difficult to see that under the combination in graph (d), pixels at the lowest gray levels appear pure blue, those at the highest levels pure red, those at the midpoint pure green, and the remaining pixels take mixed pseudo-colors of the three primaries. Clearly, with this combination scheme no two gray levels in the image receive the same color.

Figure 4-3 Gray-to-color conversion transfer functions

(a), (b) and (c) transfer functions converting gray level to red, green and blue; (d) combined transfer function
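Transfer functions of this kind can be approximated with piecewise-linear look-up tables; the breakpoints below are one plausible reading of the combined scheme in graph (d), not the exact curves of the figure.

```python
import numpy as np

L = 255  # maximum gray level of the band

def make_lut(xp, fp):
    """Piecewise-linear transfer function sampled at every gray level 0..L."""
    g = np.arange(L + 1)
    return np.interp(g, xp, fp).astype(np.uint8)

# Assumed breakpoints: low gray levels -> pure blue, mid -> green, high -> red
red_lut   = make_lut([0, L / 2, 3 * L / 4, L], [0, 0, 255, 255])
green_lut = make_lut([0, L / 4, L / 2, 3 * L / 4, L], [0, 0, 255, 0, 0])
blue_lut  = make_lut([0, L / 4, L / 2, L], [255, 255, 0, 0])

def gray_to_color(band):
    """Apply the three transfer functions to one band -> pseudo-color RGB."""
    return np.stack([red_lut[band], green_lut[band], blue_lut[band]], axis=-1)

band = np.array([[0, 127], [128, 255]], dtype=np.uint8)
rgb = gray_to_color(band)   # gray 0 -> pure blue, gray 255 -> pure red
```

Because the three look-up tables never coincide, no two gray levels receive the same color, as the text notes.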

To make better use of the information in multi-band images and improve image interpretability, color compositing can also be used to enhance information. Its basic principle is similar to the single-band pseudo-color enhancement described above, except that the red, green and blue transformations are applied not to different gray levels of one band but to three (or two) separate bands. That is, according to a preset table relating band gray levels to colors, the CCT values of the three (or two) bands directly control the output intensities of the red, green and blue guns of the color display device in the image processing system, and the bands are displayed on the color screen through additive synthesis; alternatively, they can be scanned in turn onto color films of the three primaries and printed as color photographs.
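A minimal additive color composite can be sketched as below, assuming three co-registered 8-bit bands; the tiny TM4/TM3/TM2 arrays are synthetic stand-ins, and the band-to-gun assignment follows the common standard false-color convention.

```python
import numpy as np

def composite(r_band, g_band, b_band):
    """Assign three bands to the red, green and blue guns after an
    independent linear stretch of each band to 0..255 (additive synthesis)."""
    def stretch(b):
        b = b.astype(float)
        return ((b - b.min()) / (b.max() - b.min()) * 255).astype(np.uint8)
    return np.stack([stretch(r_band), stretch(g_band), stretch(b_band)], axis=-1)

# Hypothetical TM4 (near-IR), TM3 (red), TM2 (green) -> standard false color
tm4 = np.array([[50, 60], [70, 80]], dtype=np.uint8)
tm3 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
tm2 = np.array([[ 5, 15], [25, 35]], dtype=np.uint8)
false_color = composite(tm4, tm3, tm2)
```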

(3) Principal component analysis

Principal component analysis (PCA), also known as the K-L (Karhunen-Loève) transform or principal component transform, is one of the most commonly used methods for extracting and enhancing information in multi-band remote sensing images, and the most commonly used method in remote sensing lithologic information extraction. By computing the variance-covariance matrix or correlation matrix of the image data, obtaining its eigenvalues and eigenvectors, and then transforming the data accordingly, it concentrates the image information and compresses the data; using the differences between the target rocks and the background objects, the whole image is processed to obtain the required target information. In remote sensing, PCA is mainly used for image coding and data compression, image information extraction and enhancement, change detection, and investigating the intrinsic dimensionality of multi-temporal image data. Mathematically, it is a multi-dimensional orthogonal linear transformation based on the statistical characteristics of the image; geometrically, it is equivalent to a rotation of the image's spectral space, and the transformed principal components are orthogonal and uncorrelated. In essence it, too, is a method based on the spectral reflectance intensity of ground objects. Simply put, principal component analysis has three steps: ① compute the variance-covariance matrix or correlation matrix of the input image data; ② compute the eigenvalues and eigenvectors of that matrix; ③ compute the principal components.
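The three steps can be sketched directly with NumPy (the non-standardized form, using the variance-covariance matrix); the synthetic bands merely illustrate that correlated bands concentrate their variance in PC1.

```python
import numpy as np

def principal_components(bands):
    """PCA of a multi-band image, following the three steps in the text:
    (1) variance-covariance matrix, (2) eigenvalues/eigenvectors,
    (3) the principal components themselves."""
    n_bands = bands.shape[0]
    X = bands.reshape(n_bands, -1).astype(float)           # one row per band
    cov = np.cov(X)                                        # step 1
    eigvals, eigvecs = np.linalg.eigh(cov)                 # step 2
    order = np.argsort(eigvals)[::-1]                      # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T @ (X - X.mean(axis=1, keepdims=True))  # step 3
    return eigvals, eigvecs, pcs.reshape(bands.shape)

rng = np.random.default_rng(1)
base = rng.normal(size=(32, 32))
# Three highly correlated synthetic bands plus a little independent noise
bands = np.stack([base + 0.1 * rng.normal(size=(32, 32)) for _ in range(3)])
eigvals, eigvecs, pcs = principal_components(bands)
```

Replacing `np.cov(X)` with `np.corrcoef(X)` gives the standardized form discussed next.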

When the variance-covariance matrix is used, the procedure is called non-standardized principal component analysis; when the correlation matrix is used, it is called standardized principal component analysis. Singh and Harrison (1985) studied Landsat MSS data from northern and eastern India and showed that standardized principal component analysis improves the signal-to-noise ratio and enhances the image information. Eklundh and Singh (1993) carried out principal component analysis on four kinds of data including Landsat TM and SPOT; their results likewise show that, compared with the non-standardized form, standardized principal component analysis improves the signal-to-noise ratio of the image.

Selective principal component analysis, proposed by Crosta, A.P. et al. in 1989, applies principal component analysis to selected, geologically diagnostic bands. In 1990, Loughlin, W.P. divided Landsat TM data into two groups, TM 1, 3, 4, 5 and TM 1, 4, 5, 7, and performed a principal component transformation on each group separately, identifying the informative components by comparing the eigenvector loadings of the PCA images with the mineral spectral curves. In essence, the method extracts iron-oxide information by stretching, through the principal component transformation, the spectral contrast between TM5 and TM1 and between TM5 and TM4, and extracts the spectral information of hydroxyl-bearing minerals by stretching the spectral contrast between TM5 and TM7.
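The band-grouping idea can be sketched as follows; the random 7-band stack is only a stand-in for real TM data, and in practice the sign pattern of the loadings would be compared against mineral spectra as the text describes.

```python
import numpy as np

def pc_loadings(bands):
    """Eigenvector loadings of a selected band subset (selective PCA).
    Column j holds the loadings of PC(j+1), largest variance first."""
    X = bands.reshape(bands.shape[0], -1).astype(float)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]

# Hypothetical stack of the 7 TM bands (random stand-ins for illustration)
rng = np.random.default_rng(2)
tm = rng.normal(size=(7, 16, 16))
# Loughlin's two groups: TM 1,3,4,5 for iron oxides, TM 1,4,5,7 for hydroxyls
iron_loadings     = pc_loadings(tm[[0, 2, 3, 4]])
hydroxyl_loadings = pc_loadings(tm[[0, 3, 4, 6]])
```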

2. Information extraction methods based on differences in the pattern of spectral reflectance variation

Arithmetic enhancement extracts image information and enhances contrast in multi-band images through addition, subtraction, multiplication, division and mixtures of these operations between bands. Moore et al. (1993) enhanced the selectivity of gypsum, clay and hydrothermally altered silica in TM image data by inter-band addition and subtraction, with good results. In practice, the most common operation in image enhancement is division, usually called the ratio operation. The ratio operation exploits the different spectral reflectance characteristics of different ground objects in different bands, carrying out inter-band division to extract ground-object information and enhance image contrast. According to the form of the numerator and denominator, ratio operations can be simply divided into simple ratios, combined ratios and normalized ratios.

Because simple ratios are easy to compute and give a marked contrast-enhancement effect, the inter-band ratios of TM image data, the most common remote sensing data source, have been studied thoroughly and used to enhance and extract vegetation information, rock-alteration information and so on. Table 4-1 gives several main simple ratios between TM data bands.

Table 4-1 Several main simple ratios between TM data bands

(Adapted from Tong Qingxi et al. (1994))
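The basic ratio operations can be sketched as follows; `eps` guards against zero denominators, and the TM4/TM3 sample arrays are synthetic stand-ins.

```python
import numpy as np

def ratio(a, b, eps=1e-6):
    """Simple band ratio a/b; eps guards against division by zero."""
    return a.astype(float) / (b.astype(float) + eps)

def normalized_ratio(a, b, eps=1e-6):
    """Normalized (standardized) ratio, e.g. NDVI when a = NIR and b = red."""
    a, b = a.astype(float), b.astype(float)
    return (a - b) / (a + b + eps)

# Hypothetical TM4 (near-IR) and TM3 (red) bands; top row "vegetation"
tm4 = np.array([[200, 180], [60, 50]], dtype=np.uint8)
tm3 = np.array([[ 50,  60], [55, 48]], dtype=np.uint8)
veg_ratio = ratio(tm4, tm3)           # TM4/TM3, large over vegetation
ndvi = normalized_ratio(tm4, tm3)     # bounded in [-1, 1]
```

A combined ratio simply puts a sum of bands in the numerator or denominator, built from the same primitives.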

When ratio enhancement is used, its basic functions are as follows. ① It amplifies small gray differences between ground objects, which helps distinguish objects with less obvious spectral differences, such as rocks and soils, and can also be used to study vegetation types and distribution; it can eliminate or weaken the influence of environmental factors such as topography. ② It can be used to extract rock information and alteration information closely related to mineralization. ③ The ratio images can be color-composited to enhance the expression of ground objects and highlight target information; a ratio image corrected for atmospheric scattering is independent of illumination, solar incidence angle and diffuse reflection. Its disadvantages are that a ratio image loses its independent spectral meaning, the total reflectance intensity information of the ground objects, and the topographic information of the image. Practice has shown that identifying mineral information on black-and-white ratio images is quite difficult. If three ratio images capable of extracting mineral information are selected and composited in red, green and blue according to colorimetric principles, so that mineral information and wall rock appear in different colors, mineral information can be identified directly on the image by visual inspection and its position determined. The ratio color-compositing method can therefore be regarded as the basic method of mineral information extraction.

3. Others

(1) Directed principal component analysis

Fraser, S.J. and Green, A.A. proposed directed principal component analysis in 1987: by principal component transformation of two ratio images (one a vegetation image such as TM4/TM3, the other an alteration image such as TM5/TM7 or TM5/TM1), alteration information can be enhanced while the spectral interference of vegetation is suppressed. Zhao et al. (1991) used a similar method to extract hydrothermal alteration information, and Fraser, S.J. (1991) used it to distinguish and identify iron oxides. Zhang Manlang (1996) improved this directed principal component analysis: with the ratios TM7/TM1 and TM4/TM3 as inputs, the resulting PC2 enhances the spectral information of iron oxides while suppressing the spectral interference of vegetation, and with the ratios TM5/TM7 and TM4/TM3 as inputs, the resulting PC2 enhances the spectral information of hydroxyl-bearing minerals. Srikant and Moore (1994) applied principal component analysis to the log-residual image of TM data, which improved the spectral differences of terrain features in the image, and used directed principal component analysis on a sub-region of southwest Spain, successfully improving the spectral contrast of iron minerals.
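A minimal sketch of directed PCA on two ratio images, assuming the ratios have already been computed; the uniform random images below are stand-ins for real TM4/TM3 and TM5/TM7 ratios.

```python
import numpy as np

def directed_pc2(ratio_a, ratio_b):
    """Directed PCA of two ratio images: PC2 of the pair contrasts the two
    ratios, enhancing alteration while suppressing vegetation response."""
    X = np.stack([ratio_a.ravel(), ratio_b.ravel()]).astype(float)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]              # PC1 first
    eigvecs = eigvecs[:, order]
    pcs = eigvecs.T @ (X - X.mean(axis=1, keepdims=True))
    return pcs[1].reshape(ratio_a.shape)           # PC2

# Hypothetical vegetation ratio (TM4/TM3) and alteration ratio (TM5/TM7)
rng = np.random.default_rng(3)
veg   = rng.uniform(1, 4, (16, 16))
alter = rng.uniform(1, 2, (16, 16))
pc2 = directed_pc2(veg, alter)
```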

(2) Ratio-characteristic principal component analysis

This method combines ratio processing with characteristic (selective) principal component analysis. Enhancement of alteration information by the ratio method alone is often limited by regional natural conditions and by the need to suppress interference (vegetation, atmosphere, lichen, etc.) according to its spectral exclusion law, which requires collecting spectral data of various ground objects and is thus objectively constrained. Liu Zhijie (1995) proposed the ratio-characteristic principal component mixed analysis method, as follows.

1) Determination of the spectral information image of iron-bearing minerals (F image for short)

First, TM 1, 3, 4 and 5 are used as one group to derive the iron-bearing mineral image; in the F image the information of hydroxyl-bearing minerals is suppressed. The principal component transformation is applied, the transformed PC images are analyzed qualitatively, and the F image is determined. The PC image selected as the F image must meet the following requirements: TM3 and TM1, or TM3 and TM4, or TM5 and TM4 have eigenvector loadings of opposite sign; and at least one of TM3 or TM5 carries a strong loading.

2) Determination of the spectral information image of hydroxyl-bearing minerals (H image for short)

The transformation process for extracting the H image is similar to that for the F image. The difference is that, when selecting the original band combination, two ratio images, TM5/TM7 and TM4/TM3, are used instead of TM 1, 4, 5, 7. The reasons are: first, the PC4 obtained from transforming the latter band group would need preprocessing before it could meet the requirements of an H image; second, using the ratios does not affect the final compositing. The brightness of the original F and H images is very low, so to produce a good visual effect and aid further interpretation, the H image is histogram-equalized and then composited with the F image and an enhanced TM7 image as a false-color image.

(3) IHS transform (intensity-hue-saturation transform)

In colorimetry, colors can be represented by the three primaries red (R), green (G) and blue (B), or described by the chromaticity variables perceived by the human eye: intensity (I), hue (H) and saturation (S). These two sets of variables define two color coordinate systems: RGB color space and IHS color space (also called the Munsell system). The relationship between the two systems is shown in Figure 4-4. In the figure, the I axis is perpendicular to the paper (passing through S=0, white), and along the I axis only brightness varies; the circumference represents the change of H, with red set at H=0; the radial direction represents saturation, with white at the center (S=0) and the purest, fully saturated colors on the circumference (S=1). Clearly there is a definite relationship between the RGB and IHS coordinate systems, and the mathematical model that defines the transformation between them is called the IHS transform or color coordinate transform (Munsell transform). Conventionally, the transformation from RGB space to IHS space is called the forward transform, and that from IHS space to RGB space the inverse transform.

Figure 4-4 Relationship between RGB color space and IHS color space
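The forward-enhance-inverse workflow can be illustrated with the Python standard library's `colorsys` module; note that its HSV model is used here only as a simple, closely related stand-in for a true IHS transform, and the pixel values are synthetic.

```python
import colorsys

def enhance_saturation(pixels, gain):
    """Saturation stretch via a hue-preserving color-space round trip:
    forward transform, scale S, inverse transform."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # forward
        s = min(1.0, s * gain)                                    # enhance S
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)                 # inverse
        out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
    return out

# A washed-out reddish pixel becomes more vividly red; pure gray is unchanged
pixels = [(180, 150, 150), (128, 128, 128)]
enhanced = enhance_saturation(pixels, gain=2.0)
```

The same round-trip structure underlies the three applications listed below: the H, S or I channel is replaced or stretched between the forward and inverse transforms.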

In recent years, researchers at home and abroad have paid increasing attention to the IHS color transform because of its flexibility and practicality, and many IHS transform formulas have been produced. At present, in remote sensing image processing, the IHS transform is mainly used in the following three areas:

1) Saturation enhancement of color composite images.

2) Composite display of remote sensing images with different resolutions. For example, combining Landsat MSS with digitized aerial photographs through the IHS transform can produce color images that have both MSS spectral coverage (green to near infrared) and SPOT-like spatial resolution (10 m).

3) Comprehensive display of multi-source data. Geological data such as geophysical and geochemical exploration measurements are digitized and used as the H or S variable, with the remote sensing image as I, and the IHS forward transform is applied to obtain a color composite of the remote sensing image with the geophysical, geochemical and other geological information. Such images retain the clear geomorphological and geological background of the remote sensing image while accurately displaying the geophysical and geochemical information against that background, which is very helpful for comprehensively analyzing and interpreting the relationships between them.

(4) Decorrelation stretch transformation

The decorrelation stretch is a technique based on the principal component transformation. It consists of three distinct stages: ① transforming the original image bands into their principal components; ② contrast-stretching the principal components; ③ applying the inverse principal component transformation and displaying the result in the original color space. N.A. Campbell (1996) studied these three stages in detail. He regards the decorrelation stretch as essentially another linear transformation of the spectral bands, different from the principal component transformation itself; in the second stage, stretching normalizes the variances, yielding uncorrelated variables of unit variance, and the enhancement of the displayed image depends mainly on the particular contrast this produces. Studying Thermal Infrared Multispectral Scanner (TIMS) data from Hawaii, USA, he found that a small change in the eigenvector defining the first principal component caused only a small change in the decorrelation-stretch coefficients, yet produced a markedly different decorrelation-stretched image. Decorrelation-stretch analysis of the six visible, near-infrared and short-wave-infrared TM bands showed that the results can be unstable: some images are sensitive to slight changes in the decorrelation-stretch coefficients while others are not. See the original text for the detailed calculations and analysis.
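The three stages can be sketched as follows, equalizing each component to unit variance as in stage ②; a real implementation would rescale the result for display, and the correlated bands here are synthetic.

```python
import numpy as np

def decorrelation_stretch(bands):
    """Decorrelation stretch, following the three stages in the text:
    (1) forward PCA, (2) equalize each component's variance,
    (3) inverse PCA back to the original band space."""
    n = bands.shape[0]
    X = bands.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))   # stage 1: principal axes
    pcs = eigvecs.T @ (X - mean)
    pcs /= np.sqrt(eigvals)[:, None]               # stage 2: unit variance
    Y = eigvecs @ pcs + mean                       # stage 3: rotate back
    return Y.reshape(bands.shape)

rng = np.random.default_rng(4)
base = rng.normal(size=(32, 32))
# Three strongly correlated bands, as is typical of the visible TM bands
bands = np.stack([2 * base + 0.2 * rng.normal(size=(32, 32)) for _ in range(3)])
stretched = decorrelation_stretch(bands)
corr_after = np.corrcoef(stretched.reshape(3, -1))   # ~identity matrix
```

After the round trip the output bands are mutually uncorrelated with equal variance, which is exactly the "weighted sum and difference of the original bands" character noted in the summary below.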

In short, the decorrelation stretch is a linear transformation of the original spectral bands, usually a weighted sum and difference of them. Research shows that the method is effective for some remote sensing image data, producing good image effects and providing a new starting point, while for other data the results are poorer than those of the principal component transformation. As with image displays of the principal component transformation and canonical transformations, the actual decorrelation-stretch vectors are rarely interpreted, so the resulting images cannot be understood spectrally.