The total deviation cannot simply be taken as the sum of the n individual deviations, because positive and negative deviations would cancel one another out.
Instead, the sum of the squares of the deviations is usually used as the total deviation, and it is minimized, so that the regression line is the one with the smallest value of Q among all straight lines. This method of minimizing the sum of squared deviations is called the method of least squares.
The sum of the absolute values of the deviations could be used instead, but absolute values make the calculation inconvenient, so in practical application people prefer to use Q = (y₁ − bx₁ − a)² + (y₂ − bx₂ − a)² + … + (yₙ − bxₙ − a)². The problem thus boils down to: for what values of a and b is Q smallest, that is, for which line y = bx + a is the "total distance" from the sample points to the line smallest.
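To make the idea concrete, here is a minimal Python sketch, using hypothetical sample data not taken from the text, that evaluates Q for two candidate lines; the line obtained by the least squares method gives the smaller value of Q.

    # Hypothetical sample data for illustration only.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]

    def Q(b, a):
        # Q = (y1 - b*x1 - a)^2 + ... + (yn - b*xn - a)^2
        return sum((y - b * x - a) ** 2 for x, y in zip(xs, ys))

    print(Q(1.5, 1.0))    # an arbitrary line y = 1.5x + 1 gives Q ≈ 3.56
    print(Q(1.96, 0.14))  # the least-squares line for this data gives Q ≈ 0.092

Any other choice of a and b gives a value of Q at least as large as that of the least-squares line.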
Steps for solving the linear regression equation
1. Compute the (arithmetic) means x̄ and ȳ of the two correlated variables from the given sample.
2. Compute the numerator and denominator separately (either of the two equivalent formulas may be used): numerator = (x₁ − x̄)(y₁ − ȳ) + … + (xₙ − x̄)(yₙ − ȳ) = x₁y₁ + … + xₙyₙ − n·x̄·ȳ; denominator = (x₁ − x̄)² + … + (xₙ − x̄)² = x₁² + … + xₙ² − n·x̄².
3. Compute b: b = numerator / denominator (see the sketch after this list).
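A minimal sketch of these steps, assuming hypothetical sample data (the variable names are illustrative, not from the text). The intercept a = ȳ − b·x̄ is not among the steps listed above, but it is the standard way to complete the equation y = bx + a, so it is included for completeness.

    # Hypothetical sample data for illustration only.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]
    n = len(xs)

    # Step 1: arithmetic means of the two variables.
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n

    # Step 2: numerator and denominator, using the second of the two
    # equivalent formulas (x1*y1 + ... + xn*yn - n*x_bar*y_bar, etc.).
    numerator = sum(x * y for x, y in zip(xs, ys)) - n * x_bar * y_bar
    denominator = sum(x * x for x in xs) - n * x_bar * x_bar

    # Step 3: the slope b of the regression line y = bx + a.
    b = numerator / denominator

    # Standard completion (assumption, not among the listed steps):
    # a = y_bar - b * x_bar, so the line passes through (x_bar, y_bar).
    a = y_bar - b * x_bar

    print(b, a)  # approximately 1.96 and 0.14 for this sample

With these values the fitted line for the sample above is approximately y = 1.96x + 0.14, the same line used in the earlier sketch.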