What are the differences among objective function, loss function and cost function in machine learning?
The goal of machine learning is to predict y. We build a model f(x, w) through mathematical methods and use it to express the relationship between x and y (of course, how well this relationship is captured depends heavily on the choice of model). F(x) = f(x, w) is what we generally call the objective function.

Once the model is determined, what remains is to optimize the parameters w so that f(x, w) describes the relationship between x and y as accurately as possible. A perfectly exact fit is impossible for complex real-world problems, so there is always some error. The function we use to represent this error is the loss function, L(y, f(x, w)).

In practice, this ideal loss cannot be expressed exactly. We still need some quantified measure of error to guide the adjustment of w, so we settle for the next best thing and use approximate substitutes, such as the mean squared error, to evaluate the model and drive the training iterations. This is essentially the difference between the loss function and the cost function: the former is the ideal measure of error, the latter the practical one we can actually compute. In common usage, the loss function measures the error on a single sample, while the cost function averages the loss over the whole training set.
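The relationship among the three can be made concrete with a small sketch. Assume a toy linear model f(x, w) = w * x (the model form, learning rate, and step count below are illustrative choices, not anything prescribed above): the per-sample squared error plays the role of the loss, averaging it over the dataset gives the cost, and gradient descent on that cost is what "optimizing w" means.

```python
def f(x, w):
    """Model: predicts y from x using the parameter w."""
    return w * x

def loss(y_true, y_pred):
    """Loss function: squared error for a single sample."""
    return (y_true - y_pred) ** 2

def cost(xs, ys, w):
    """Cost function: average of the per-sample loss over the dataset."""
    return sum(loss(y, f(x, w)) for x, y in zip(xs, ys)) / len(xs)

def fit(xs, ys, w=0.0, lr=0.01, steps=500):
    """Gradient descent on the cost to adjust w."""
    n = len(xs)
    for _ in range(steps):
        # d(cost)/dw = (2/n) * sum(x * (f(x, w) - y))
        grad = (2 / n) * sum(x * (f(x, w) - y) for x, y in zip(xs, ys))
        w -= lr * grad
    return w

# Noise-free data generated by y = 3x, so the fitted w should approach 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit(xs, ys)
```

Note that `loss` takes one (y, prediction) pair while `cost` takes the whole dataset; the optimizer only ever looks at the cost, which is the computable stand-in for the ideal error described above.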