Kubernetes

Introduction

Kubernetes (sometimes abbreviated as K8s) is an open source, general-purpose container orchestration system initially developed by Google for running and managing containerized workloads and services. Historically, this meant deploying and scaling web applications, but more recently the ML community has co-opted the technology to serve as the underlying orchestration layer for training and deploying ML models.
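To make "orchestration" concrete, the sketch below uses the official Kubernetes Python client to submit a containerized training run as a Kubernetes Job: the cluster decides which node runs the container, pulls the image, and retries on failure. This is a minimal illustration, not part of any particular platform's API; the image name, namespace, training command, and GPU limit are placeholders, and a real cluster would need a reachable image registry and GPU-enabled nodes.

```python
# Minimal sketch: submit a containerized ML training run as a Kubernetes Job
# using the official Kubernetes Python client (pip install kubernetes).
# Image name, namespace, command, and resource limits are illustrative placeholders.
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="mnist-training"),
    spec=client.V1JobSpec(
        backoff_limit=2,  # retry the pod up to twice on failure
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="registry.example.com/mnist-trainer:latest",  # hypothetical image
                        command=["python", "train.py", "--epochs", "10"],
                        # Ask the scheduler for one GPU (requires GPU nodes and the
                        # NVIDIA device plugin to be installed on the cluster).
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"}
                        ),
                    )
                ],
            )
        ),
    ),
)

# Kubernetes picks a suitable node, pulls the image, runs the container to
# completion, and restarts it (up to backoff_limit) if it fails.
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

The same pattern extends to serving: a trained model can be packaged into a container and exposed as a long-running Deployment rather than a run-to-completion Job.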

Kubernetes has exploded in popularity in recent years, and even though it is still a relatively young project, thousands of organizations have invested in the technology to manage their public cloud or on-premise deployments. Companies either deploy their own Kubernetes cluster or use managed cloud services such as EKS on AWS.

Relationship to Machine Learning

Kubernetes provides the raw ingredients of an ML platform, but it is not, by itself, well suited to ML practitioners: it is endlessly complicated and designed to be deployed and managed by DevOps teams. Gradient is an “ML layer” that runs on top of Kubernetes and targets data scientists. See below for more information.

Benefits of Kubernetes

Kubernetes can run on any infrastructure, which facilitates multi-cloud adoption and application portability.

Kubernetes has gained widespread adoption and is supported by a vast community, so it becomes more capable every day. There is also a growing ecosystem of tools that integrate with or build on top of Kubernetes.

Kubernetes + Gradient

Gradient is built on top of Kubernetes. This provides multi-cloud capability, interoperability with Kubernetes plugins, and robust scheduling and resource management.
