TD_OneClassSVM Syntax Elements - Analytics Database

Database Analytic Functions

Product: Analytics Database (Teradata Vantage™)
Release: 17.20
Published: June 2022
Last Update: 2024-04-06

InputColumns
Specify the names of the input table columns to use for training the model (the predictors, features, or independent variables).
MaxIterNum
[Optional] Specify the maximum number of iterations (minibatches) over the training data batches. Value is a positive integer less than 10,000,000. Default value is 300.
BatchSize
[Optional] Specify the number of observations (training samples) processed in a single minibatch per AMP. A value of 0, or a value greater than the number of rows on an AMP, processes all rows on the AMP, so that the entire dataset is processed in a single iteration and the algorithm becomes gradient descent. Value is a non-negative integer. Default is 10.
RegularizationLambda
[Optional] Specify the amount of regularization to add. The higher the value, the stronger the regularization. The value is also used to compute the learning rate when LearningRate is set to Optimal. A value of 0 means no regularization. Value is a non-negative float. Default is 0.02.
Alpha
[Optional] Specify the Elasticnet parameter for penalty computation. Only effective when RegularizationLambda is greater than 0. The value represents the contribution ratio of L1 in the penalty: a value of 1.0 indicates L1 (LASSO) only, a value of 0 indicates L2 (Ridge) only, and a value between 0 and 1 gives a combination of L1 and L2. Value is a float between 0 and 1. Default is 0.15.
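Teradata does not publish the exact internal form of the penalty, but the standard elastic-net combination that RegularizationLambda and Alpha describe can be sketched as follows (the function and variable names here are illustrative, not part of the TD_OneClassSVM interface):

```python
# Illustrative sketch of an elastic-net penalty; the exact form used
# internally by TD_OneClassSVM is an assumption, not documented behavior.
def elastic_net_penalty(weights, lam=0.02, alpha=0.15):
    """penalty = lambda * (alpha * L1 + (1 - alpha) * L2), with the
    documented defaults lam=0.02 (RegularizationLambda) and alpha=0.15."""
    l1 = sum(abs(w) for w in weights)       # LASSO term
    l2 = sum(w * w for w in weights)        # Ridge term
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

# alpha=1.0 reduces to pure L1, alpha=0.0 to pure L2, as described above.
print(elastic_net_penalty([0.5, -0.25, 1.0]))
```

With lam set to 0 the penalty vanishes entirely, which matches the "a value of 0 means no regularization" behavior of RegularizationLambda.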
IterNumNoChange
[Optional] Specify the number of consecutive iterations (minibatches) with no loss improvement greater than Tolerance before training stops early. A value of 0 disables early stopping, and the algorithm continues until MaxIterNum iterations are reached. Value is a non-negative integer. Default is 50.
Tolerance
[Optional] Specify the stopping criterion in terms of loss function improvement. Applicable when IterNumNoChange is greater than 0. Value is a positive float. Default is 0.001.
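As an illustration of how IterNumNoChange and Tolerance interact, the usual early-stopping rule can be sketched like this (an assumed reading of the behavior, not Teradata's documented internals):

```python
# Illustrative early-stopping check: stop once `iter_num_no_change`
# consecutive minibatches fail to improve the best loss by more than
# `tolerance`. This mirrors the description above; the exact internal
# logic of TD_OneClassSVM is an assumption.
def should_stop(losses, iter_num_no_change=50, tolerance=0.001):
    if iter_num_no_change == 0 or len(losses) <= iter_num_no_change:
        return False          # IterNumNoChange = 0 disables early stopping
    best = losses[0]
    no_improve = 0
    for loss in losses[1:]:
        if loss < best - tolerance:   # improvement larger than Tolerance
            best = loss
            no_improve = 0
        else:
            no_improve += 1
        if no_improve >= iter_num_no_change:
            return True
    return False

# Loss plateaus within tolerance for 3 minibatches -> stop.
print(should_stop([1.0, 0.9, 0.8999, 0.8998, 0.8997],
                  iter_num_no_change=3, tolerance=0.001))
```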
Intercept
[Optional] Specify whether to estimate an intercept term; set to false if the data is already centered. Default is true.
LearningRate
[Optional] Specify the learning rate algorithm. Learning rates are:
  • Constant
  • InvTime
  • Optimal
  • Adaptive
Default is InvTime for Gaussian and Optimal for Binomial.
InitialEta
[Optional] Specify the initial value of eta for the learning rate. When LearningRate is Constant, this value is the learning rate for all iterations. Value is numeric. Default is 0.05.
DecayRate
[Optional] Specify the decay rate for the learning rate. Only applicable to the InvTime and Adaptive learning rates. Value is numeric. Default is 0.25.
DecaySteps
[Optional] Specify the number of iterations without decay for the Adaptive learning rate. The learning rate changes by DecayRate after this many iterations. Value is an integer. Default is 5.
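The exact decay formulas are not documented here; a common reading of the InvTime and Adaptive schedules, using the defaults above (InitialEta 0.05, DecayRate 0.25, DecaySteps 5), can be sketched as:

```python
# Illustrative learning-rate schedules. Both formulas are standard
# textbook forms assumed to match the descriptions above, not
# Teradata-documented equations.
def invtime_eta(eta0, decay_rate, t):
    """InvTime: eta shrinks continuously with iteration count t."""
    return eta0 / (1.0 + decay_rate * t)

def adaptive_eta(eta0, decay_rate, decay_steps, t):
    """Adaptive: eta stays flat for decay_steps iterations, then is
    multiplied by decay_rate at each decay_steps boundary."""
    return eta0 * (decay_rate ** (t // decay_steps))

# InvTime decays every iteration; Adaptive decays in steps of 5.
for t in (0, 4, 5):
    print(t, invtime_eta(0.05, 0.25, t), adaptive_eta(0.05, 0.25, 5, t))
```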
Momentum
[Optional] Specify the value to use for the momentum learning-rate optimizer. A larger value indicates a higher momentum contribution. A value of 0 disables the momentum optimizer. A value between 0.6 and 0.95 is recommended for a useful momentum contribution. Value is a non-negative float between 0 and 1. Default is 0.
Nesterov
[Optional] Specify whether to apply Nesterov optimization to the momentum optimizer. Only applicable when Momentum is greater than 0. Default is false.
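The standard momentum and Nesterov update rules that these two arguments refer to can be sketched as follows (textbook formulas; Teradata's internal implementation may differ):

```python
# One SGD update with (optionally Nesterov) momentum, in the standard
# textbook form. grad_fn, velocity, and the update layout are
# illustrative assumptions, not TD_OneClassSVM internals.
def momentum_step(w, grad_fn, velocity, eta=0.05, momentum=0.9,
                  nesterov=False):
    if nesterov:
        # Nesterov: evaluate the gradient at the look-ahead point.
        g = grad_fn(w + momentum * velocity)
    else:
        # Plain momentum: evaluate the gradient at the current point.
        g = grad_fn(w)
    velocity = momentum * velocity - eta * g
    return w + velocity, velocity

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, v = 1.0, 0.0
w, v = momentum_step(w, lambda x: 2 * x, v)
print(w, v)
```

With Momentum set to 0 the velocity term vanishes and the update reduces to plain SGD, which matches the "momentum optimizer is disabled" behavior described above.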
LocalSGDIterations
[Optional] Specify the number of local iterations for the Local SGD algorithm. A value of 0 disables Local SGD; a value greater than 0 enables it, with the number of local iterations equal to the specified value. With the Local SGD algorithm, the recommended argument values are:
  • LocalSGDIterations: 10
  • MaxIterNum: 100
  • BatchSize: 50
  • IterNumNoChange: 5
Value is a non-negative integer. Default is 0.
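The Local SGD scheme referenced above can be sketched in its textbook form: each worker (AMP) runs several local SGD steps on its own data, and the resulting models are then averaged. This is an assumption about the general algorithm, not Teradata's exact implementation:

```python
# Illustrative single communication round of Local SGD over scalar
# weights. worker_grad_fns stands in for each AMP's local data; all
# names here are hypothetical, not part of the TD_OneClassSVM interface.
def local_sgd_round(worker_weights, worker_grad_fns, eta=0.05,
                    local_iters=10):
    updated = []
    for w, grad_fn in zip(worker_weights, worker_grad_fns):
        # Each worker runs `local_iters` plain SGD steps independently
        # (LocalSGDIterations) before any communication happens.
        for _ in range(local_iters):
            w = w - eta * grad_fn(w)
        updated.append(w)
    # Communication step: average the locally trained models.
    return sum(updated) / len(updated)

# Two workers with different local objectives, (w-1)^2 and (w-3)^2.
print(local_sgd_round([0.0, 0.0],
                      [lambda w: 2 * (w - 1), lambda w: 2 * (w - 3)],
                      eta=0.05, local_iters=1))
```

Fewer communication rounds is the point of the scheme, which is why the recommendation above pairs a small MaxIterNum (100) with multiple local iterations (10) per round.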