Interpretability


Interpretability, often used interchangeably with explainability, is the degree to which a model's predictions can be explained in straightforward human terms.

Deep neural networks are typically "opaque": their inherent complexity makes it hard to trace how inputs lead to predictions. By contrast, many classical machine learning algorithms are interpretable (e.g., linear regression, logistic regression, decision trees), though this is not always the case; SVMs and XGBoost are notably difficult to interpret. A simple linear model makes the distinction concrete, as the sketch below shows.
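
To see what "interpretable by design" means in practice, here is a minimal sketch of a linear regression whose learned coefficients read directly as per-feature effects. scikit-learn and its bundled diabetes dataset are illustrative assumptions; this page does not prescribe a library.

```python
# Interpretability by design: a linear model's coefficients map directly
# to per-feature effects. Assumes scikit-learn is installed.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

# Each coefficient is the change in the predicted target per unit change
# in that feature, a direct, human-readable explanation of the model.
for name, coef in sorted(
    zip(data.feature_names, model.coef_), key=lambda t: -abs(t[1])
):
    print(f"{name:>6}: {coef:+8.2f}")
```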

The Importance of Model Interpretability

The importance of interpretability is relative. For Netflix, if a prediction goes awry and a poor recommendation is made, the consequence, aside from a small monetary loss, is minimal. In this case, the "risk" incurred by deploying predictive models is easily outweighed by the benefit. But what about when machine learning is used to diagnose patients or determine creditworthiness? In these cases, explainability is not only important, it may be a regulatory requirement, especially in heavily regulated industries such as banking, medicine, and insurance.

Several open source projects focus on explaining opaque models, such as DeepLIFT and LIME. As a brief illustration, the sketch below uses LIME to explain a single prediction.
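
The following is a hedged sketch of post-hoc explanation with LIME, which fits a simple local surrogate model around one instance and reports per-feature contributions. The random forest classifier and iris dataset are illustrative assumptions, not part of this page.

```python
# Post-hoc explanation with LIME (https://github.com/marcotcr/lime).
# Assumes `pip install lime scikit-learn`; model and data are illustrative.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance, queries the model,
# and fits a weighted linear surrogate to report local feature effects.
explanation = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=4
)
print(explanation.as_list())
```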

Source: Interpretable Machine Learning by Christoph Molnar