# Confusion Matrix

![](https://2327526407-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LvBP1svpACTB1R1x_U4%2F-LvGspxW3Zko2589SZEN%2F-LvHIIphhNcOOkLBgk9j%2Fimage.png?alt=media\&token=c764d39b-1c22-4dc0-a118-4fe4f42e9f37)

A confusion matrix, typically represented as a table, is a popular [evaluation metric](https://machine-learning.paperspace.com/wiki/metrics-in-machine-learning) used to **describe the performance of a classification model** (or "classifier"). The table compares predicted values against actual values. Its basic components are as follows (a code sketch after this list shows how to compute them):

* **True positives (TP):** The prediction was yes, and the true value was yes
* **True negatives (TN):** The prediction was no, and the true value was no
* **False positives (FP):** The prediction was yes, but the true value was no
* **False negatives (FN):** The prediction was no, but the true value was yes
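
A minimal sketch of computing these four counts with scikit-learn's `confusion_matrix`; the `y_true` and `y_pred` vectors below are made-up example data:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # actual values (1 = yes, 0 = no)
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]  # model predictions

# For binary labels, .ravel() flattens the 2x2 matrix into the four counts.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")  # TP=3, TN=4, FP=1, FN=2
```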

### Related Metrics

The confusion matrix is closely related to other metrics like Precision, Recall/Sensitivity, Specificity, and F1 Score. Their definitions are as follows:

| **Metric**         | **Formula**           | **Definition**                                                  |
| ------------------ | --------------------- | --------------------------------------------------------------- |
| Accuracy           | (TP+TN)/(TP+TN+FP+FN) | Percentage of all items classified correctly                     |
| Precision          | TP/(TP+FP)            | How accurate the positive predictions are                        |
| Recall/Sensitivity | TP/(TP+FN)            | True positive rate (1 - recall gives the false negative rate)    |
| Specificity        | TN/(TN+FP)            | True negative rate (1 - specificity gives the false positive rate) |
| F1 score           | 2TP/(2TP+FP+FN)       | The harmonic mean of precision and recall                        |
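
A minimal sketch computing each metric in the table directly from the four counts; the TP/TN/FP/FN values are the made-up counts from the sketch above:

```python
tp, tn, fp, fn = 3, 4, 1, 2  # example counts from the confusion matrix above

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)           # sensitivity / true positive rate
specificity = tn / (tn + fp)           # true negative rate
f1          = 2 * tp / (2 * tp + fp + fn)

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, "
      f"recall={recall:.2f}, specificity={specificity:.2f}, f1={f1:.2f}")
# accuracy=0.70, precision=0.75, recall=0.60, specificity=0.80, f1=0.67
```

Note how the example separates the metrics: accuracy alone looks reasonable (0.70), but the lower recall (0.60) reveals that the model misses a substantial share of actual positives, which is exactly the kind of distinction the confusion matrix makes visible.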
