Random Forest
Random forests are an ensemble learning technique that combines multiple decision trees into a single final model (the "forest"), which produces more accurate and stable predictions than any one of its constituent trees.
Random forests operate on the principle that a large number of trees acting as a committee (forming a strong learner) will outperform any single constituent tree (a weak learner). This is akin to the requirement in statistics for a sample size large enough to be representative. Some individual trees may be wrong, but as long as they are not making completely random predictions, their aggregate will form a good approximation of the underlying data.
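A minimal sketch of the committee idea, assuming scikit-learn and a synthetic dataset (the dataset, tree count, and depth here are illustrative choices, not prescribed above): several shallow trees each see a different random slice of the data so they disagree slightly, and the ensemble predicts by majority vote, previewing the bagging idea discussed next.

```python
# Committee-of-trees sketch: each tree sees a different random slice of the
# data, then the ensemble predicts by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
trees = []
for _ in range(25):
    # Each committee member trains on a random 60% slice of the rows.
    idx = rng.choice(len(X), size=int(0.6 * len(X)), replace=False)
    trees.append(DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]))

# Collect each tree's vote and take the majority class per sample.
votes = np.stack([tree.predict(X) for tree in trees])   # shape: (n_trees, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)       # mean > 0.5 == majority for 0/1 labels
print("committee accuracy:", (majority == y).mean())
```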
Bagging (bootstrap aggregating) is the algorithmic technique random forests use; recall that it differs from gradient boosting. Bagging trains each decision tree on a random subset of the dataset, which reduces the correlation between trees. A benefit of bagging over boosting is that bagging can be performed in parallel, whereas boosting is a sequential operation.
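A hedged sketch of bagging itself, again assuming scikit-learn and a toy dataset: each tree is fit on a bootstrap sample (rows drawn with replacement), the fits are independent of one another, and the forest aggregates their votes. Because the fits do not depend on each other, they can run in parallel; scikit-learn's RandomForestClassifier bags trees internally and exposes that parallelism through n_jobs.

```python
# Bagging sketch: bootstrap-sample the rows, fit one tree per sample, aggregate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def fit_bagged_tree(X, y, rng):
    # Bootstrap sample: draw n rows *with replacement*.
    idx = rng.integers(0, len(X), size=len(X))
    return DecisionTreeClassifier().fit(X[idx], y[idx])

# Each fit is independent, which is why bagging parallelizes easily,
# unlike boosting, where each model depends on the previous one.
trees = [fit_bagged_tree(X, y, rng) for _ in range(50)]
votes = np.stack([t.predict(X) for t in trees])
bagged_pred = (votes.mean(axis=0) > 0.5).astype(int)

# Library equivalent: a bagged forest trained in parallel across cores.
forest = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0).fit(X, y)
```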
Individual decision trees are prone to overfitting and tend to learn the noise in the dataset. Random forests take an average of multiple trees, so as long as the individual trees are not correlated, this strategy reduces overfitting and sensitivity to noise in the dataset.
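One way to see the overfitting claim in practice, as a sketch with an assumed noisy synthetic dataset and illustrative hyperparameters: a single unconstrained tree fits the training set almost perfectly by memorizing the noise, while the averaged forest typically generalizes better on held-out data.

```python
# Compare a single fully grown tree with a random forest on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)   # flip_y injects label noise
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The single tree typically scores ~1.0 on training data (it memorizes the noise)
# but drops on the test set; averaging many decorrelated trees usually narrows
# that train/test gap.
print("tree   train/test:", tree.score(X_tr, y_tr), tree.score(X_te, y_te))
print("forest train/test:", forest.score(X_tr, y_tr), forest.score(X_te, y_te))
```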