Support Vector Machines (SVMs) are classification algorithms. The objective of an SVM is similar to that of a binary logistic regression algorithm: Given a set of predictor variables, classify an object as having one of two possible outcomes. However:
- A binary logistic regression algorithm develops a probabilistic model from a training data set. Then, given test instance x, the algorithm estimates the probability that x belongs in a particular class.
- An SVM takes a training data set and seeks the boundary that maximizes the margin (the distance between the boundary and the nearest training points of each class). Then, given test instance x, the SVM determines the side of the boundary on which x lies, to predict its class.
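The contrast above can be sketched with a toy linear model. The weights `w` and intercept `b` below are hypothetical placeholders (in practice they are learned from training data); the point is only that logistic regression returns a probability while an SVM returns a hard side-of-boundary label:

```python
import math

# Hypothetical learned linear model: w and b define the boundary w.x + b = 0.
w = [0.8, -0.5]
b = 0.1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def logistic_predict(x):
    """Logistic regression: estimate P(class = +1 | x) via the sigmoid."""
    return 1.0 / (1.0 + math.exp(-(dot(w, x) + b)))

def svm_predict(x):
    """SVM: report which side of the boundary x lies on (+1 or -1)."""
    return 1 if dot(w, x) + b >= 0 else -1

x = [1.0, 0.5]
print(logistic_predict(x))  # a probability strictly between 0 and 1
print(svm_predict(x))       # a hard class label: +1 or -1
```

For this `x`, the decision value w.x + b is 0.65, so the SVM predicts +1 and the logistic model reports a probability above 0.5; both agree, but only the logistic model quantifies its confidence.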
Aster Analytics has two versions of SVM classifiers:
- SparseSVM functions use a linear kernel method for input in sparse format.
- DenseSVM functions can use linear or nonlinear kernel methods for input in dense format.
These SVM functions, though binary, can classify objects into more than two classes by using the machine-learning reduction technique one-against-all: one binary SVM is trained for each class, with the nth class labeled positive and all other classes labeled negative. Each trained SVM then scores each test observation, and the class whose SVM produces the most confident positive prediction is the final prediction.
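The one-against-all reduction can be sketched as follows. The three per-class decision functions are hypothetical stand-ins for trained binary SVMs (each would normally be fit with its own class labeled positive); the prediction step simply takes the class whose scorer returns the highest decision value:

```python
# Hypothetical per-class decision functions (w.x + b for each class).
# In practice, each would come from training a binary SVM in which that
# class is labeled positive and all other classes negative.
classifiers = {
    "A": lambda x: 0.9 * x[0] - 0.2 * x[1] + 0.1,
    "B": lambda x: -0.4 * x[0] + 0.7 * x[1],
    "C": lambda x: -0.5 * x[0] - 0.5 * x[1] + 0.3,
}

def predict(x):
    # Score x with every binary SVM; the class whose SVM returns the
    # highest decision value is the multiclass prediction.
    return max(classifiers, key=lambda c: classifiers[c](x))

print(predict([1.0, 0.2]))  # scores: A=0.96, B=-0.26, C=-0.30 -> "A"
```

With k classes, this scheme trains k binary SVMs and evaluates all k on every test observation.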