In the Leibniz test for alternating series (from the infinite-series chapter of postgraduate mathematics), the monotone-decrease condition is sometimes only verified for x sufficiently large. Why does the test still apply in that case? The reasoning is as follows.
Saying that f(x) is monotonically decreasing when x is large enough means there exists N > 0 such that f(x) is monotonically decreasing on (N, +∞).
The terms with n = 1, 2, ..., N are only finitely many terms of the series,
and changing finitely many terms of a series does not affect its convergence or divergence.
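To spell out that step (a brief sketch, not part of the original argument): if $\sum a_n$ is modified only in its first N terms to obtain $\sum b_n$, then for every M > N the partial sums differ by a fixed amount,
\[
  \sum_{n=1}^{M} b_n \;=\; \sum_{n=1}^{M} a_n \;+\; \sum_{n=1}^{N}\bigl(b_n - a_n\bigr),
\]
where the last sum does not depend on M, so one sequence of partial sums converges exactly when the other does.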
So the first N terms can be replaced by 0, and the resulting series behaves the same as the series starting from n = N + 1,
to which the usual Leibniz test applies.
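A concrete example (illustrative, not taken from the original question): consider the alternating series $\sum_{n=1}^{\infty} (-1)^n \frac{\ln n}{n}$, with
\[
  f(x) = \frac{\ln x}{x}, \qquad f'(x) = \frac{1 - \ln x}{x^2} < 0 \quad (x > e).
\]
The terms $\frac{\ln n}{n}$ are therefore monotonically decreasing only from n = 3 onward (indeed $\frac{\ln 2}{2} < \frac{\ln 3}{3}$), and they tend to 0. The Leibniz test applied to the tail $\sum_{n=3}^{\infty} (-1)^n \frac{\ln n}{n}$ gives convergence, and restoring the first two terms only shifts the partial sums by a constant, so the full series converges as well.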