Interpretability
Interpretability, a term often used interchangeably with explainability, is the degree to which a model's predictions can be explained in plain human terms.
Deep neural networks are typically "opaque": their inherent complexity makes their behavior difficult to decipher. By contrast, many classical machine learning algorithms are interpretable (e.g., linear regression, logistic regression, decision trees), though this is not always the case; support-vector machines and gradient-boosted ensembles such as XGBoost are notably harder to interpret. The sketch below illustrates the point.
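As a minimal sketch of why a linear model counts as interpretable, the example below fits a logistic regression and reads its learned coefficients directly: each weight states how a feature pushes the prediction. The scikit-learn pipeline and the built-in breast cancer dataset are illustrative choices, not tied to any particular application.

```python
# A minimal sketch of an interpretable classical model: logistic
# regression, where each learned coefficient directly describes a
# feature's influence. Dataset and pipeline are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize so coefficient magnitudes are comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]

# Rank features by the absolute size of their learned weight and print
# the five most influential ones, sign included.
ranked = sorted(zip(X.columns, coefs), key=lambda t: abs(t[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.3f}")
```

A deep network offers no analogous read-out: its behavior is distributed across many layers of weights, which is what motivates the post-hoc explanation tools discussed below.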
The importance of interpretability is relative. If a Netflix recommendation goes awry, the consequence, beyond some lost revenue, is minimal; the "risk" incurred by deploying predictive models is easily outweighed by the benefit. But what about when machine learning is used to diagnose patients or determine creditworthiness? In those cases, explainability is not only important, it may be a regulatory requirement, especially in heavily regulated industries such as banking, medicine, and insurance.
There are several open source projects focused on this topic, such as DeepLIFT and LIME.
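As a hedged sketch of how such tools are used, the example below applies LIME's tabular explainer to a single prediction from a black-box model; LIME perturbs the instance and fits a local, interpretable surrogate around it. The random forest and dataset are illustrative assumptions, and the snippet assumes the `lime` package is installed (`pip install lime`).

```python
# Sketch of explaining one prediction with LIME (assumes `pip install lime`).
# The model and dataset here are illustrative, not prescribed by LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single instance: LIME samples perturbed neighbors, queries the
# black-box model's probabilities, and fits a weighted linear surrogate.
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # (feature condition, local weight) pairs
```

The output lists the feature conditions that most influenced this one prediction, giving a local, human-readable explanation even though the underlying model is opaque.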