SVM Optimization
See TD_SVM.
How a Support Vector Machine Predicts on Unseen Data
Support Vector Machine (SVM) uses a learned hyperplane to make predictions on unseen data points. The hyperplane is determined during the training phase by minimizing the error or maximizing the margin, depending on the type of task (classification or regression). The kernel trick is used to transform the input data into a higher-dimensional space if necessary to improve the performance of the SVM algorithm on complex, non-linear datasets.
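As a rough illustration of the kernel trick described above (sketched here with scikit-learn rather than TD_SVM, whose internals are not shown), an RBF kernel lets an SVM separate XOR-style data that no hyperplane in the original input space can split:

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: not linearly separable in the 2-D input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# The RBF kernel implicitly maps the points into a higher-dimensional
# space where a separating hyperplane does exist
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
train_acc = clf.score(X, y)  # all four points classified correctly
```

A linear kernel on the same data could not reach full training accuracy, which is why kernelized SVMs are preferred on complex, non-linear datasets.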
Classification:
- Feature Extraction: Extracts the relevant features from the unseen data point.
- Distance Calculation: Calculates the distance between the new data point and the hyperplane that was learned during the training phase.
- Decision Rule: The sign of the distance indicates which side of the hyperplane the data point falls on.
  - If the distance is positive, the data point is classified as belonging to the positive class.
  - If the distance is negative, the data point is classified as belonging to the negative class.
- Confidence Estimation: The magnitude of the distance is also used to estimate the confidence of the prediction.
  - A larger distance indicates higher confidence in the prediction.
  - A smaller distance indicates lower confidence.
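The classification steps above can be sketched as follows (using scikit-learn's SVC as a stand-in for the TD_SVM implementation; the data and parameters are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: two linearly separable classes
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

# Unseen data point
x_new = np.array([[0.8, 0.9]])

# Signed distance to the learned hyperplane (scaled by the weight norm)
score = clf.decision_function(x_new)[0]

label = 1 if score > 0 else -1  # decision rule: the sign picks the class
confidence = abs(score)         # the magnitude serves as a confidence proxy
```

Here the sign of `score` implements the decision rule and its magnitude the confidence estimate, mirroring the steps listed above.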
Regression:
- Feature Extraction: Extracts the relevant features from the new data point.
- Hyperplane Evaluation: Evaluates the hyperplane learned during the training phase at the new data point to produce the predicted output value.
- Confidence Estimation: The distance of the data point from the regression hyperplane is used to estimate the confidence of the prediction.
  - A smaller distance indicates higher confidence in the prediction.
  - A larger distance indicates lower confidence.
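The regression steps can be sketched the same way (again using scikit-learn's SVR as an illustrative stand-in, with made-up data and parameters):

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression data following y = 2x
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

# epsilon defines the insensitive tube around the regression hyperplane
reg = SVR(kernel="linear", C=100.0, epsilon=0.1).fit(X, y)

# Predicted output: the value of the learned hyperplane at the new point
x_new = np.array([[2.5]])
y_pred = reg.predict(x_new)[0]
```

The prediction is simply the learned function evaluated at the new point; how far a point sits from that function (relative to the epsilon tube) is what the confidence estimate above is based on.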
The TD_SVMPredict function supports both classification and regression tasks.