Accuracy and Loss

Accuracy and loss are two of the most widely known and frequently discussed metrics in machine learning.

Accuracy

Accuracy is a metric for measuring a classification model's performance, typically expressed as a percentage. It is the proportion of predictions in which the predicted value matches the true value; for an individual sample the outcome is binary (the prediction is either correct or incorrect). Accuracy is often plotted and monitored during training, although the value reported is usually the overall or final model accuracy. It is easier to interpret than loss.
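As a quick illustration (the labels below are invented for the example), accuracy can be computed by counting exact matches between predictions and true values:

```python
# Minimal sketch of computing accuracy by hand; y_true and y_pred are made-up
# labels for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth class labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true)    # fraction of correct predictions
print(f"Accuracy: {accuracy:.0%}")  # 75%
```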

Loss

A loss function, also known as a cost function, takes into account the probability or uncertainty of a prediction, penalizing it according to how much the prediction deviates from the true value. This gives a more nuanced view of how well the model is performing.

Unlike accuracy, loss is not a percentage: it is a summation of the errors made for each sample in the training or validation set. Loss is commonly used during training to find the "best" parameter values for the model (e.g., the weights of a neural network); the goal of training is to minimize this value.
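For instance, a minimal gradient-descent sketch (with an invented toy dataset and a single weight) shows how a parameter is repeatedly nudged in the direction that lowers the loss:

```python
# Sketch: gradient descent adjusting one weight w to minimize mean squared
# error on a toy dataset (values are invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]           # true relationship is y = 2x

w = 0.0                             # initial parameter value
lr = 0.01                           # learning rate

for step in range(200):
    # gradient of MSE with respect to w: mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad                  # step in the direction that lowers loss

mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"w ≈ {w:.3f}, MSE ≈ {mse:.6f}")  # w approaches 2, loss approaches 0
```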

Common loss functions include log loss and cross-entropy loss (which yield the same result when calculating error rates between 0 and 1), mean squared error, and likelihood loss.
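As a sketch of how two of these are computed (the values and helper functions below are illustrative, not taken from any particular library):

```python
import math

# Binary cross-entropy (log loss) for classification and mean squared error
# for regression, computed on hand-made values.
def log_loss(y_true, y_prob):
    """Binary cross-entropy averaged over samples."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_prob)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average squared difference between predictions and targets."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Both predictions below are "correct" if thresholded at 0.5, but the
# confident one incurs a much lower loss.
print(log_loss([1, 0], [0.9, 0.1]))                 # ≈ 0.105 (confident, correct)
print(log_loss([1, 0], [0.6, 0.4]))                 # ≈ 0.511 (hesitant, still correct)
print(mean_squared_error([3.0, 5.0], [2.5, 5.5]))   # 0.25
```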

Unlike accuracy, loss may be used in both classification and regression problems.

Relationship Between Accuracy and Loss

Most of the time, accuracy increases as loss decreases, but this is not always the case. Accuracy and loss have different definitions and measure different things. They often appear to be inversely related, yet there is no strict mathematical relationship between the two metrics.
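A small toy example (with invented probabilities) makes this concrete: two models can reach identical accuracy while incurring very different losses, because loss also reflects how confident the predictions are.

```python
import math

# Two hypothetical models classify the same three samples identically when
# thresholded at 0.5, so their accuracy is equal, yet their log losses differ
# because one is far less confident.
y_true  = [1, 1, 0]
model_a = [0.95, 0.90, 0.05]   # confident and correct
model_b = [0.55, 0.60, 0.45]   # barely correct

def log_loss(truth, probs):
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(truth, probs)) / len(truth)

def accuracy(truth, probs, threshold=0.5):
    return sum(t == (p >= threshold) for t, p in zip(truth, probs)) / len(truth)

print(accuracy(y_true, model_a), log_loss(y_true, model_a))  # 1.0, ≈ 0.069
print(accuracy(y_true, model_b), log_loss(y_true, model_b))  # 1.0, ≈ 0.569
```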
