Decision trees are a supervised learning technique used for both classification and regression problems. A decision tree builds a piecewise constant approximation of the training data. Decision trees are popular in data mining and supervised learning because they are robust to many problems in real-world data, such as missing values, irrelevant variables, outliers in input variables, and differences in variable scales.
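To illustrate the piecewise constant idea, the sketch below fits a depth-1 regression tree (a "stump") in pure Python: it picks the single split threshold that minimizes squared error, then predicts one constant on each side of the split. The helper names (fit_stump, predict_stump) are illustrative only and do not correspond to any Aster Analytics function.

```python
def fit_stump(xs, ys):
    """Find the split threshold on one feature that minimizes the
    summed squared error of a two-piece constant fit."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # skip degenerate splits with an empty side
        lm = sum(left) / len(left)    # constant prediction on the left piece
        rm = sum(right) / len(right)  # constant prediction on the right piece
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return t, lm, rm

def predict_stump(stump, x):
    t, lm, rm = stump
    return lm if x <= t else rm

# Two clusters of responses: the stump splits between x=3 and x=10 and
# approximates each side with its mean, a piecewise constant function.
xs = [1, 2, 3, 10, 11, 12]
ys = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
stump = fit_stump(xs, ys)
```

A deeper tree repeats this split recursively on each side, refining the constant pieces; the prediction surface is always a step function over the input space.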
The single decision tree algorithm, implemented in the Single Decision Tree Functions, is easy to use and has few parameters to tune. However, it is prone to overfitting and high variance. To help address these issues, Aster Analytics provides the Random Forest Functions, AdaBoost Functions, and XGBoost Functions. These functions build many trees from the same data set and combine their results, reducing both the variance and the risk of overfitting.
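The variance reduction from averaging many trees can be seen numerically. In this hedged sketch, simulated noisy predictions stand in for individual trees: if each tree's prediction is an independent estimate with variance sigma squared, the mean of B such predictions has variance roughly sigma squared over B. All names and constants here are illustrative, not part of any Aster Analytics API.

```python
import random

def variance(samples):
    """Population variance of a list of numbers."""
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

rng = random.Random(42)
TRUTH, SIGMA = 5.0, 2.0   # true value and per-"tree" prediction noise
TRIALS, B = 2000, 25      # repeated experiments; B trees per ensemble

# One "tree" prediction per trial: variance stays near SIGMA**2.
single = [rng.gauss(TRUTH, SIGMA) for _ in range(TRIALS)]

# An ensemble of B independent "trees" per trial, averaged:
# variance drops toward SIGMA**2 / B.
ensemble = [sum(rng.gauss(TRUTH, SIGMA) for _ in range(B)) / B
            for _ in range(TRIALS)]

print(variance(single))    # close to SIGMA**2
print(variance(ensemble))  # close to SIGMA**2 / B, far smaller
```

Real forests fall short of the fully independent ideal because trees trained on the same data are correlated; random forests inject randomness (bootstrap samples, random feature subsets) precisely to decorrelate the trees, and boosting methods such as AdaBoost and XGBoost instead fit trees sequentially to the errors of the current ensemble.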