Decision trees are typically classification models. A classification model (or classifier) predicts the value of a categorical variable. A decision tree variation called a regression tree is a regression model instead of a classification model.
A categorical variable has a predetermined set of values. The values can be nominal or ordinal, but decision trees typically treat ordinal and nominal values the same. An example of a categorical variable with nominal values is marital_status, whose value can be 'single', 'married', or 'divorced'. An example of a categorical variable with ordinal values is temperature, whose value can be 'low', 'medium', or 'high'.
A decision tree not only predicts the value of a categorical variable; it can also use categorical variables as input (predictor) variables.
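As a minimal sketch of these ideas, the following example trains a decision tree classifier whose predictors and target are both categorical. The feature names (`temperature`, `outlook`) and the toy data are assumptions for illustration; scikit-learn's tree implementation requires numeric inputs, so the categorical predictors are first mapped to integer codes.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data: two categorical predictors and a categorical target.
X_raw = [["low", "sunny"], ["high", "rain"], ["medium", "rain"], ["high", "sunny"]]
y = ["no", "yes", "yes", "no"]

# Map categorical predictor values to integer codes, since scikit-learn
# trees operate on numeric feature matrices.
enc = OrdinalEncoder()
X = enc.fit_transform(X_raw)

# Fit a classification tree and predict the categorical target
# for a new combination of categorical inputs.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict(enc.transform([["high", "rain"]]))
```

A regression tree would be built the same way with `DecisionTreeRegressor` and a numeric target instead of a categorical one.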
Decision trees are well suited to large numbers of input variables, mixed data types, and heterogeneous data (that is, data in which the relationships among values vary throughout the data space).
Decision trees provide insight into the structure of the data space and the meaning of a model, which can be as important as the accuracy of a model.