Gradient Boost is one of the most popular Machine Learning algorithms in use. And get this, it’s not that complicated! This video is the first part in a series that walks through it one step at a time. This video focuses on the main ideas behind using Gradient Boost to predict a continuous value, like someone’s weight. We call this “using Gradient Boost for Regression”. In the next video, we’ll work through the math to prove that Gradient Boost for Regression really is this simple. In part 3, we’ll walk through how Gradient Boost classifies samples into two different categories, and in part 4, we’ll go through the math again, this time focusing on classification.
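The main ideas from the video can be sketched in code: start with the average of the target values as the initial prediction, then repeatedly fit a small tree to the residuals and add a scaled-down (learning-rate-weighted) version of its output to the running prediction. This is a minimal sketch, not the video's exact example; the toy data and the one-split "stump" weak learner are made-up stand-ins for the small regression trees Gradient Boost actually builds.

```python
# Toy data: (height, color code) -> weight. Hypothetical values for illustration.
X = [[1.6, 0], [1.6, 1], [1.5, 0], [1.8, 1], [1.5, 1], [1.4, 0]]
y = [88, 76, 56, 73, 77, 57]

def fit_stump(X, residuals):
    """Fit a one-split regression stump that minimizes squared error on the residuals."""
    best = None
    for feat in range(len(X[0])):
        for thresh in sorted({row[feat] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[feat] <= thresh]
            right = [r for row, r in zip(X, residuals) if row[feat] > thresh]
            if not left or not right:
                continue  # skip splits that put everything on one side
            lmean = sum(left) / len(left)
            rmean = sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, feat, thresh, lmean, rmean)
    _, feat, thresh, lmean, rmean = best
    return lambda row: lmean if row[feat] <= thresh else rmean

# Step 1: the initial prediction is just the average of the observed weights.
f0 = sum(y) / len(y)
learning_rate = 0.1  # scales each tree's contribution (low variance, many small steps)
trees = []
preds = [f0] * len(y)

# Step 2: repeatedly fit a tree to the residuals (observed - predicted)
# and nudge the predictions toward the targets.
for _ in range(100):
    residuals = [yi - pi for yi, pi in zip(y, preds)]
    tree = fit_stump(X, residuals)
    trees.append(tree)
    preds = [p + learning_rate * tree(row) for p, row in zip(preds, X)]

def predict(row):
    """Final prediction = initial average + scaled sum of all the trees."""
    return f0 + learning_rate * sum(tree(row) for tree in trees)
```

Each round shrinks the residuals a little, which is the bias/variance tradeoff in action: many small steps with a low learning rate generalize better than one big tree fit directly to the targets.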
This StatQuest assumes that you already understand…
…and the tradeoff between Bias and Variance that plagues Machine Learning:
For a complete index of all the StatQuest videos, check out:
This StatQuest is based on the following sources:
A 1999 manuscript by Jerome Friedman that introduced Stochastic Gradient Boosting:
The Wikipedia article on Gradient Boosting:
The scikit-learn implementation of Gradient Boosting:
If you’d like to support StatQuest, please consider…
…a cool StatQuest t-shirt or sweatshirt (USA/Europe):
…buying one or two of my songs (or go large and get a whole album!)
…or just donating to StatQuest!
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on Twitter: