AI Wiki
Generative Adversarial Network (GAN)


Last updated 5 years ago


Generative Adversarial Networks (GANs) were introduced in 2014 by Ian Goodfellow and are a fast-growing area of deep learning that can be used to generate realistic images, speech, prose, and more.

Presented here is an image of a GAN-generated person who does not exist:

“[GANs are] the most interesting idea in the last 10 years in Machine Learning” -- Yann LeCun

Architecture

GANs are composed of two networks:

  • Generator: creates new synthetic data (e.g. an image) and tries to trick the discriminator into believing the fake data is authentic

  • Discriminator: evaluates samples passed from the generator and attempts to discern whether the data (e.g. an image) comes from the training dataset, meaning it's authentic, or was generated, meaning it's fake
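The two roles can be sketched with toy NumPy "networks" (single linear layers standing in for deep networks; every shape, weight, and name here is illustrative, not a real GAN implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Maps random noise z to a synthetic sample via a single linear layer.
    # w is a toy stand-in for the generator's deep network.
    return z @ w

def discriminator(x, v):
    # Returns a probability in (0, 1) that each sample x is authentic,
    # via a linear layer followed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(x @ v)))

z = rng.normal(size=(4, 2))   # batch of 4 noise vectors
w = rng.normal(size=(2, 3))   # generator weights: noise dim -> data dim
v = rng.normal(size=(3, 1))   # discriminator weights: data dim -> score

fake = generator(z, w)        # noise in, synthetic samples out
scores = discriminator(fake, v)
print(scores.shape)           # (4, 1), each entry strictly between 0 and 1
```

In a real GAN, both functions are deep neural networks trained jointly, with the generator updated to push the discriminator's scores on fake samples toward 1.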

Steps in a GAN

  1. The generator creates a fake sample

  2. The generated (fake) sample is fed into the discriminator alongside a real sample taken from the training dataset

  3. The discriminator returns a prediction as a probability between 0 and 1, where 0 means the sample is definitely fake and 1 means it is definitely authentic
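The three steps above can be traced in a minimal numeric sketch. The scalar discriminator, the fixed fake sample, and the Gaussian "training data" are all toy assumptions; the losses shown are the standard binary cross-entropy objectives used to train the two networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x, v, b):
    # Probability that sample x is authentic (sigmoid of a linear score).
    return 1.0 / (1.0 + np.exp(-(x * v + b)))

v, b = 1.0, 0.0          # toy discriminator parameters

# Step 1: the generator creates a fake sample (here: a fixed stand-in value).
fake = -2.0

# Step 2: the fake sample is fed in alongside a real sample from the dataset.
real = rng.normal(loc=3.0)   # toy draw from the "training data" distribution

# Step 3: the discriminator outputs probabilities between 0 and 1.
p_real = discriminator(real, v, b)
p_fake = discriminator(fake, v, b)

# Binary cross-entropy: the discriminator is rewarded for pushing
# p_real toward 1 and p_fake toward 0; the generator is rewarded for
# pushing p_fake toward 1.
d_loss = -(np.log(p_real) + np.log(1.0 - p_fake))
g_loss = -np.log(p_fake)
print(round(p_fake, 3))  # 0.119
```

Training alternates between lowering `d_loss` (discriminator update) and lowering `g_loss` (generator update) until the generator's samples are hard to tell apart from real data.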

A fake image of a person generated by https://thispersondoesnotexist.com/ (source: becominghuman.ai)