First, let's review the vertical operation of multiplication, as shown in the figure below.
Multiply the rightmost digit of the multiplier by the multiplicand and write the result down. Then multiply the next digit of the multiplier by the multiplicand, shift the result one place to the left, and write it down, continuing until every digit of the multiplier has been used. Finally, add up these rows to get the final result. Throughout the whole calculation we perform only simple steps: shift, multiply, and sum. A complicated product of large numbers is thus decomposed into these basic operations, and quite efficiently.
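The three steps just described — digit-by-digit multiplication, shifting, and summing — can be sketched in code. This is only a minimal illustration; the function name and the least-significant-digit-first convention are my own choices:

```python
def long_multiply(a_digits, b_digits):
    """Vertical multiplication of two digit lists, least significant digit first."""
    # Steps 1 and 2: multiply each digit of the multiplier by the multiplicand,
    # shifted left by its position, and accumulate the partial products.
    result = [0] * (len(a_digits) + len(b_digits))
    for i, bd in enumerate(b_digits):
        for j, ad in enumerate(a_digits):
            result[i + j] += ad * bd
    # Step 3: propagate carries so every slot is a single decimal digit.
    carry = 0
    for k in range(len(result)):
        total = result[k] + carry
        result[k] = total % 10
        carry = total // 10
    while len(result) > 1 and result[-1] == 0:  # trim leading zeros
        result.pop()
    return result

# 123 * 45 = 5535; digits are written least-significant-first.
print(long_multiply([3, 2, 1], [5, 4]))  # [5, 3, 5, 5]
```

Note that the carries are applied only at the end here; doing them row by row, as on paper, gives the same answer.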
Now look at this process from another angle: because the number is written in decimal, it can always be decomposed into a polynomial in its digits — an ordinary polynomial with the unknown set to 10.
Since we can compute this when x = 10, can we compute it when x takes other values? The answer is in the figure below.
Polynomial multiplication can also be carried out by the vertical operation, and it is just as simple: shift, multiply, sum. But there are differences. For ordinary decimal numbers, whenever a digit reaches 10 we carry one to the next place, which reflects the relation 10·10ⁿ = 10ⁿ⁺¹.
Every base has a similar carry relation: binary satisfies 2·2ⁿ = 2ⁿ⁺¹, and octal satisfies 8·8ⁿ = 8ⁿ⁺¹.
Now, however, x is an unknown, so the carry relation no longer holds, and coefficients greater than 10 can appear. In fact, a coefficient can be arbitrarily large, because x can be arbitrarily large.
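The contrast can be made concrete: multiply the coefficient lists without carrying, and the result still evaluates correctly at x = 10 — or at any other base. A small sketch (function names are my own):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first.
    No carrying: coefficients may exceed 9."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def evaluate(coeffs, x):
    """Evaluate the coefficient list as a polynomial at x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

p, q = [3, 2, 1], [5, 4]        # the digits of 123 and 45, read as polynomials
prod = poly_mul(p, q)           # [15, 22, 13, 4]: coefficients exceed 9
print(prod, evaluate(prod, 10)) # evaluating at x = 10 recovers 5535 = 123 * 45
```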
Some remarks about number systems were interspersed above; now let's get back to the point. The discussion shows that the vertical operation extends easily to polynomials. So, if we are greedy, can it be extended to still more general cases? A polynomial is a special case of a Taylor series — a Taylor series with finitely many terms — so can the operation be extended to products of Taylor series? It can, but writing out infinitely many terms is too much trouble, so I choose another direction of generalization. A Fourier polynomial is the finite form of a Fourier series, so let us try to extend the vertical operation to Fourier polynomials.
As the figure shows, the vertical operation also handles multiplication of Fourier polynomials. But something new appears: the basis exponents of a Fourier polynomial run over both positive and negative integers, so when multiplying term by term one must determine the shift position carefully. Here, because the basis term has exponent −1, the whole row is shifted one place to the right.
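Because exponents can be negative, a coefficient list indexed from zero is awkward; a dictionary keyed by the exponent k handles the leftward and rightward shifts uniformly. A minimal sketch (the function name and representation are my own):

```python
def fourier_poly_mul(p, q):
    """Multiply Fourier polynomials given as {k: coefficient} dicts,
    where k is the (possibly negative) exponent of the basis term e^{ikx}."""
    out = {}
    for kp, cp in p.items():
        for kq, cq in q.items():
            # Multiplying basis terms adds exponents: the partial product
            # is shifted to position kp + kq, left or right as needed.
            out[kp + kq] = out.get(kp + kq, 0) + cp * cq
    return out

# (e^{-ix} + 2) * (3 + e^{ix}) = 3e^{-ix} + 7 + 2e^{ix}
p = {-1: 1, 0: 2}
q = {0: 3, 1: 1}
print(fourier_poly_mul(p, q))  # {-1: 3, 0: 7, 1: 2}
```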
So the vertical operation actually embodies the essence of this kind of polynomial multiplication. Multiplying polynomials coefficient-wise in this way is called convolution, and the vertical operation is the concrete manifestation of this coefficient multiplication.
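Indeed, the standard library routine for discrete convolution computes exactly the vertical operation on coefficient lists:

```python
import numpy as np

# np.convolve of two coefficient lists is the vertical operation without
# carries: the coefficients of the product polynomial.
a = np.array([3, 2, 1])   # 3 + 2x + x^2, i.e. the digits of 123
b = np.array([5, 4])      # 5 + 4x, i.e. the digits of 45
print(np.convolve(a, b))  # [15 22 13  4]
```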
So, at least in the finite case, we have worked out an understanding of convolution. The continuous case raises further issues — convergence, and how operations between basis elements behave — so for now let us treat this as an intuitive correspondence.
Next, I will try to use the continuous-basis vector-space model to understand the convolution of functions. For an explanation of this model, see my previous four articles on linear operators and matrices.
First of all, any measurable function can be regarded as a vector in a vector space with a continuous basis, whose basis elements are indexed by the points of the real line. Any function we consider can then be written as a linear combination of these continuous basis elements.
The usual multiplication of functions then gives the pointwise product (fg)(x) = f(x)g(x): coefficients are multiplied only when the two basis elements coincide, and the basis itself plays almost no role.
Convolution, by contrast, is (f∗g)(t) = ∫ f(x) g(t − x) dx.
This is an ingenious vector operation: it exploits the operational properties of the continuous basis, and can be compared with the complex-exponential basis of Fourier series, since the continuous basis can be regarded as a limit of complex-exponential bases.
Seen this way, convolution is the more natural, better-behaved operation: it is essentially the generalization of multiplication in a finite-dimensional vector space, or of polynomial multiplication. Pointwise multiplication of functions is by comparison a crude operation that loses many properties of the vector space.
This article, already a little long, sat unsent for a while, because the later part contained some mistakes and the essence of convolution had not yet been found.
Recently I studied the convolution method for computing the response signal in signals and systems, along with some generalized-function (distribution) theory, and finally understood the essence of convolution.
What follows is my understanding; some familiarity with measure theory may be needed.
Generalized functions act on functions in the form of integrals. The Dirac distribution, for instance, satisfies ∫ f(x) δ(x) dx = f(0); it can be pictured as a measure on the domain of f that puts mass 1 at the origin and zero everywhere else. This is only an intuitive picture — the formal definition is more involved.
Now consider the convolution form of the generalized function, ∫ f(x) δ(t − x) dx. Something has changed: the measure itself is unchanged, but the location of its point mass moves with the parameter t, so the value picked out changes from point to point.
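The discrete analogue makes this concrete: integrating against a shifted unit impulse simply picks out the sample of f at the shifted position. A minimal sketch (names are my own):

```python
import numpy as np

f = np.array([4.0, 7.0, 1.0, 9.0])   # a function sampled at four points

def sample_via_delta(f, t):
    """Discrete analogue of ∫ f(x) δ(t - x) dx: a unit impulse at
    position t picks out the value f[t]."""
    delta = np.zeros_like(f)
    delta[t] = 1.0                    # unit mass at position t
    return float(np.sum(f * delta))   # "integral" of f against the shifted delta

# As t varies, the point mass moves and a different value is picked out.
print([sample_via_delta(f, t) for t in range(4)])  # [4.0, 7.0, 1.0, 9.0]
```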
So convolution is a generalized function with a parameter: it supplies a weighted measurement grid, and as the parameter changes, the position of the grid changes with it.
Consider convolutional neural networks — that is, the convolution operation of image processing. Although discrete, it exhibits the essential features. The kernel is a grid of weights: a 3×3 convolution template, for example, assigns a weight to each of nine pixels, and then (after flipping) forms a weighted sum of the underlying pixels. The parameter's role is to move the template, one pixel at a time, until every pixel has been computed; the output is again a matrix of pixels.
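A bare-bones sketch of this sliding weighted sum, with an averaging template as the example (function name, image, and template are my own choices; real frameworks add padding, strides, and channels):

```python
import numpy as np

def conv2d(image, template):
    """Discrete 2D convolution with a 3x3 template, 'valid' region only."""
    k = np.flip(template)            # convolution flips the template first
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):           # the parameter: slide the template
        for c in range(w - 2):       # one pixel at a time
            out[r, c] = np.sum(image[r:r+3, c:c+3] * k)  # weighted sum
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
avg = np.ones((3, 3)) / 9.0          # averaging template: nine equal weights
print(conv2d(img, avg))              # each output pixel is a local average
```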
Note: the figure on the right shows the weights of the discrete convolution template; each corresponding pixel is multiplied by its weight and the products are added. The circle on the left marks the center of the template, and the parameter's role is to move this center and compute the value at each pixel.
The same is true for the continuous situations that arise in signal processing. The "pixels" here are impulse signals, and any signal can be expressed as a linear combination of impulses. This representation can be understood via the continuous basis of function space, or simply by noting that the function is defined on the real line: for a measurable function, the value at each real number is independent, so to each real number we attach an impulse, and the function becomes an uncountable sum of impulses weighted by the corresponding function values — which in general simplifies to an integral. Convolution is then the weighted summation against such a continuous template, that is, a weighted integral, which is exactly a functional action, or the action of a generalized function. The parameter's role is to move this continuous template until every real number has been computed: the input is a real function, and so is the output.
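A discrete sketch of the signal-processing picture: a signal is a weighted sum of shifted unit impulses, and convolving with an impulse response forms the weighted, shifted sum of responses. Here h is a pure one-sample delay, a toy example of my own:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])    # input signal
h = np.array([0.0, 1.0, 0.0])    # impulse response: unit mass one step late
y = np.convolve(x, h)            # the system's output
print(y)                         # the input, delayed by one sample
```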
Note: the right figure shows the weight distribution of the continuous convolution template, with the vertical line marking its center. The parameter again moves this center and computes the value at each real number.
At this point the essence of convolution is fully displayed: it is a functional action with a parameter. Every point of the domain corresponds to a functional, and the output at each point is the action of that functional on the input function.
It is worth noting that convolution involves not only this functional action and parameter translation but also a flip, which is likewise an important feature. So to understand convolution one must distinguish two versions of the action: acting with the input as-is is the forward action, acting with the flipped input is the reversed action, and convolution is the reversed one.
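The flip is exactly what separates convolution from correlation in numerical libraries, and is easy to observe (the example arrays are my own):

```python
import numpy as np

# Correlation slides the template as-is; convolution flips it first.
# For a symmetric template the two coincide; otherwise they differ.
f = np.array([1, 2, 3])
g = np.array([0, 1, 2])
conv = np.convolve(f, g)                 # acts with the flipped g
corr = np.correlate(f, g, mode="full")   # acts with g as-is
print(conv)   # [0 1 4 7 6]
print(corr)   # [2 5 8 3 0]
```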
The convolution template also explains why continuous functions, or measurable functions with compact support, are so important: such functions usually guarantee the convergence of the convolution integral — that is, the existence of the convolution — so they can be regarded as a suitable class of functions for the operation. With this assumption made in advance, convolution can be used without scruple.
This problem is finally settled, and it was not easy. The distribution of a function of random variables, time-domain convolution versus frequency-domain product, the convolution templates of image processing, the convolution algorithm for responses in signal processing, the convolution theorem of integral transforms, convolution as the essence of polynomial products — convolution really is everywhere, yet a reasonable explanation was hard to find. Explanations are either too phenomenological, trying to convey this abstract concept with pictures, or too theoretical, mathematically correct but offering no image. My own explanation may still be hard to follow, but for me it is clear enough: a hodgepodge of operations splits into two meaningful steps. It is that simple, and yet it went unnoticed for years. One can only say the world is wonderful: without so much comparison and accumulation, this essence would remain hard to find.