Model Training

Training is how a machine learning system finds suitable parameters for a model on its own. In the most common scenario (supervised learning with labeled training data), the model learns these parameters directly from the training data.
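As a minimal illustration of this idea, independent of any particular framework, the sketch below fits a linear model to labeled data with plain gradient descent; the toy dataset and learning rate are made up for the example.

```python
import numpy as np

# Toy labeled dataset: y = 3x + 2 plus noise (made up for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=0.1, size=100)

# Parameters the training loop will learn
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for epoch in range(200):
    pred = w * X + b
    error = pred - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach w=3, b=2
```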

While the training data is used to fit the model's parameters, the objective of machine learning is to find parameters that produce accurate predictions on new, unseen data, not just the training data.
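One common way to estimate performance on unseen data is to hold out part of the labeled set before training. A sketch using scikit-learn (assuming it is available) might look like:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 25% of the data; the model never sees it during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Training accuracy can be optimistic; test accuracy estimates
# performance on new, unseen data
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```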

The ML community has converged on terms like “jobs” and “experiments” to describe the iterative model-training process, where each job or experiment represents a new iteration, much like a code commit in software development. Generally speaking, this process involves taking code written locally in an IDE or Jupyter Notebook and executing it remotely on a GPU or CPU instance (or a cluster of instances in the case of distributed training).
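The details vary by platform, but the code being executed remotely is typically an ordinary training script with a command-line entry point. The hypothetical sketch below shows the shape such a script might take; the script name, arguments, and placeholder metric are illustrative only.

```python
# train.py -- hypothetical entry point that a job scheduler could
# execute on a remote CPU/GPU instance as one job/experiment
import argparse

def main():
    parser = argparse.ArgumentParser(description="One training job")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=0.01)
    args = parser.parse_args()

    for epoch in range(args.epochs):
        # ... load data, run one training pass, compute metrics ...
        loss = 1.0 / (epoch + 1)  # placeholder metric for illustration
        # Logging to stdout lets the platform collect unified logs
        print(f"epoch={epoch} lr={args.lr} loss={loss:.4f}")

if __name__ == "__main__":
    main()
```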

Training + Gradient

Gradient from Paperspace provides an interface to track model training and deployment.

Experiments structure your machine learning projects with automatic versioning, tagging, and life-cycle management. Experiments include hyperparameter search, distributed training, a Git integration, and infrastructure automation (job scheduling, unified logs, cluster management, and more).
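To make the hyperparameter-search idea concrete, here is a minimal grid search in plain Python (using scikit-learn, independent of Gradient itself); each parameter combination corresponds to one experiment, and the search space here is made up for the example.

```python
from itertools import product

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hypothetical search space; each combination is one "experiment"
grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01]}

best_score, best_params = -1.0, None
for C, gamma in product(grid["C"], grid["gamma"]):
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    print(f"C={C} gamma={gamma} -> cv accuracy {score:.3f}")
    if score > best_score:
        best_score, best_params = score, (C, gamma)

print("best:", best_params, best_score)
```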

Gradient also includes a Jupyter Notebook integration: a GPU-enabled notebook can be launched from your browser in seconds. Gradient Notebooks are fully managed and require no setup or management of servers or dependencies.
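Once a GPU notebook is running, a quick sanity check confirms that the framework can see the GPU. Assuming PyTorch is among the preinstalled frameworks, a cell like this works:

```python
import torch

# Verify the notebook instance exposes a CUDA-capable GPU
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible; falling back to CPU")
```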

Choose from a wide variety of templates that include all the frameworks, libraries, and drivers you need for machine learning. Custom dependencies can be installed in any notebook and persist across sessions.
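For example, a custom dependency can be installed from a notebook cell with pip; the package name below is just a placeholder.

```python
# In a Jupyter cell, '!' runs a shell command in the notebook's
# environment. "some-package" is a placeholder for your dependency.
!pip install some-package
```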
