Linear Regression
Linear regression is an algorithm (belonging to both statistics and machine learning) that models the relationship between two or more variables by fitting a linear equation to a dataset. The independent variables are the features (the input data) and the dependent variable is the target (what you are trying to predict).
The technique is very simple and can be represented by this familiar equation:

y = mx + b
However, this is typically written slightly differently in machine learning:

y = b + w1x1
Or, for a more advanced model with multiple features (a short sketch computing this form follows the list of terms below):

y = b + w1x1 + w2x2 + ... + wnxn
Where:
y is the predicted label
b is the bias (the intercept)
w1 is the coefficient or weight of the first feature (the weight plays the role of m, the slope, in the familiar form)
x1 is a feature (an input)
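With made-up numbers for the bias, weights, and features (all assumed purely for illustration), the multi-feature form can be computed directly:

```python
b = 2.0                      # bias (intercept)
w = [0.5, -1.2, 3.0]         # weights, one per feature (illustrative values)
x = [4.0, 1.0, 0.5]          # one example's feature values (illustrative values)

# y = b + w1*x1 + w2*x2 + ... + wn*xn
y = b + sum(wi * xi for wi, xi in zip(w, x))
print(y)                     # 4.3
```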
As with logistic regression, gradient descent is typically used to optimize the coefficients (one weight per input feature, or column) by iteratively minimizing the model's loss during training.
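A minimal sketch of that training loop, assuming a single feature and mean squared error as the loss (the data, learning rate, and iteration count below are made up):

```python
import numpy as np

# Toy data roughly following y = 2x + 1 (values invented for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

w, b = 0.0, 0.0          # start with arbitrary weight and bias
learning_rate = 0.01

for _ in range(5000):
    y_pred = w * x + b
    error = y_pred - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step each coefficient in the direction that reduces the loss
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)              # should approach roughly 2 and 1
```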
Mean squared error and mean absolute error are common loss functions for linear regression.
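Both losses are straightforward to compute; a small sketch with made-up true and predicted values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.5, 8.0])

mse = np.mean((y_true - y_pred) ** 2)   # penalizes large errors more heavily
mae = np.mean(np.abs(y_true - y_pred))  # treats all errors proportionally
print(mse, mae)                         # 0.5, ~0.667
```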
Regularization is a technique used to prevent overfitting by penalizing large weights, so that no single feature can dominate the model's predictions.
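One common form is L2 (ridge) regularization, which adds the sum of the squared weights to the loss. A sketch, where `alpha` is a hypothetical hyperparameter controlling the penalty strength:

```python
import numpy as np

def ridge_loss(y_true, y_pred, weights, alpha=0.1):
    """Mean squared error plus an L2 penalty on the weights.

    The name `alpha` and its default value are illustrative assumptions,
    not taken from the text above.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    l2_penalty = alpha * np.sum(np.array(weights) ** 2)
    return mse + l2_penalty
```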
Linear regression predictions are continuous (e.g. test scores from 0-100).
Logistic regression predictions, in contrast, assign items to a fixed set of allowed values or classes (e.g. binary classification or multiclass classification), and the model provides a probability score (confidence) with each prediction.
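A sketch of that contrast, assuming scikit-learn is available and using made-up study-hours data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

hours_studied = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
test_scores = np.array([52, 61, 70, 83, 94])   # continuous target
passed = np.array([0, 0, 1, 1, 1])             # binary target

linreg = LinearRegression().fit(hours_studied, test_scores)
print(linreg.predict([[3.5]]))          # a continuous score, roughly 77

logreg = LogisticRegression().fit(hours_studied, passed)
print(logreg.predict([[3.5]]))          # a class label, 0 or 1
print(logreg.predict_proba([[3.5]]))    # probability (confidence) per class
```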