Teradata® Package for R Function Reference

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, VMware
Product: Teradata Package for R
Release Number: 17.20
Published: March 2024
Language: English (United States)
Last Update: 2024-05-03
dita:id: TeradataR_FxRef_Enterprise_1720
Product Category: Teradata Vantage

GLM

Description

The td_glm_sqle() function fits a generalized linear model (GLM), performing regression or classification analysis on data sets where the response follows an exponential family distribution. It supports the following models:

  • Regression (GAUSSIAN family): The loss function is squared error.

  • Binary Classification (BINOMIAL family): The loss function is logistic, implementing logistic regression.
    Response values must be 0 or 1.

The function uses the Minibatch Stochastic Gradient Descent (SGD) algorithm, which is highly scalable for large datasets. The algorithm estimates the gradient of the loss on minibatches whose size is set by the "batch.size" argument, and updates the model with a learning rate controlled by the "learning.rate" argument.
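
The minibatch SGD update described above can be sketched in pure R. This is a toy illustration of the technique, not Teradata's in-database implementation; all names here are local to the example, and the constant learning rate is a simplification.

```r
# A minimal pure-R sketch of minibatch SGD for squared-error (GAUSSIAN) loss.
set.seed(42)
n <- 200
x <- matrix(rnorm(n * 2), ncol = 2)
y <- as.vector(x %*% c(1.5, -2.0)) + rnorm(n, sd = 0.1)

w <- c(0, 0)         # model weights
batch.size <- 10     # rows per minibatch (cf. the "batch.size" argument)
eta <- 0.05          # learning rate (cf. the "initial.eta" argument)

for (iter in 1:500) {
  idx <- sample(n, batch.size)                        # draw one minibatch
  resid <- x[idx, , drop = FALSE] %*% w - y[idx]      # batch residuals
  grad <- t(x[idx, , drop = FALSE]) %*% resid / batch.size
  w <- w - eta * as.vector(grad)                      # gradient step
}
w  # close to the true coefficients c(1.5, -2.0)
```

Each iteration touches only "batch.size" rows, which is what makes the approach scale to large tables.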

The function also supports the following approaches:

  • L1, L2, and Elastic Net Regularization for shrinking model parameters.

  • Accelerated learning using Momentum and Nesterov approaches.

The function uses a combination of the "iter.num.no.change" and "tolerance" arguments to define the convergence criterion, and runs multiple iterations (up to the value specified in the "iter.max" argument) until the algorithm meets the criterion.

The function also supports LocalSGD, a variant of SGD that uses the "local.sgd.iterations" argument to run multiple batch iterations locally on each AMP, followed by a global iteration.

The weights from all mappers are aggregated in a reduce phase and are used to compute the gradient and loss in the next iteration. LocalSGD lowers communication costs and can result in faster learning and convergence in fewer iterations, especially when there is a large cluster size and many features.
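
The LocalSGD pattern (several local steps per worker, then a global combine) can be sketched in plain R. The three "shards" below stand in for AMPs, and the simple weight-averaging combine is an assumption for illustration; the in-database reduce phase is not shown here.

```r
# Illustrative-only sketch of LocalSGD: each shard runs several local
# gradient steps from the shared weights, then the results are averaged.
local_sgd_round <- function(shards, w, eta, local.iters) {
  local_w <- lapply(shards, function(s) {
    wk <- w
    for (i in 1:local.iters) {
      resid <- s$x %*% wk - s$y
      wk <- wk - eta * as.vector(t(s$x) %*% resid / nrow(s$x))
    }
    wk
  })
  Reduce(`+`, local_w) / length(local_w)   # global averaging step
}

set.seed(1)
x <- matrix(rnorm(300 * 2), ncol = 2)
y <- as.vector(x %*% c(2, -1))                        # noiseless toy target
shards <- lapply(split(1:300, rep(1:3, each = 100)),  # 3 "AMPs"
                 function(i) list(x = x[i, ], y = y[i]))
w <- c(0, 0)
for (round in 1:20) w <- local_sgd_round(shards, w, eta = 0.1, local.iters = 10)
w  # converges to c(2, -1)
```

Only one combine happens per 10 local steps, which is the communication saving the paragraph above describes.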

Because of its gradient-based learning, the function is highly sensitive to feature scaling.
Before using the features in the function, you must standardize the input features using the td_scale_fit_sqle() and td_scale_transform_sqle() functions.
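
What standardization does can be shown with base R's scale(); in Vantage, the equivalent rescaling is performed in-database by td_scale_fit_sqle() and td_scale_transform_sqle() (whose arguments are not shown here).

```r
# Standardization rescales each feature to mean 0 and standard deviation 1,
# so that no single feature dominates the gradient.
df <- data.frame(gpa = c(3.1, 3.9, 2.5, 3.5),
                 income = c(40000, 85000, 52000, 61000))
std <- scale(df)       # center and scale each column
colMeans(std)          # approximately 0 for every column
apply(std, 2, sd)      # exactly 1 for every column
```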

The function only accepts numeric features. Therefore, before training, you must convert the categorical features to numeric values.

The function skips the rows with missing (null) values during training.

The function output is a trained td_glm_sqle model, which can be used as input to the td_tdglm_predict_sqle() function. The model also contains the model statistics MSE, Loglikelihood, AIC, and BIC.
You can use td_regression_evaluator_sqle(), td_classification_evaluator_sqle(), and td_roc_sqle() functions to perform model evaluation as a post-processing step.

Usage

  td_glm_sqle (
      formula = NULL,
      data = NULL,
      input.columns = NULL,
      response.column = NULL,
      family = "GAUSSIAN",
      iter.max = 300,
      batch.size = 10,
      lambda1 = 0.02,
      alpha = 0.15,
      iter.num.no.change = 50,
      tolerance = 0.001,
      intercept = TRUE,
      class.weights = "0:1.0, 1:1.0",
      learning.rate = NULL,
      initial.eta = 0.05,
      decay.rate = 0.25,
      decay.steps = 5,
      momentum = 0.0,
      nesterov = TRUE,
      local.sgd.iterations = 0,
      ...
  )

Arguments

formula

Required Argument when "input.columns" and "response.column" are not provided,
optional otherwise.
Specifies the model to be fitted as a formula string.
Only basic formulas of the "col1 ~ col2 + col3 + ..." form are supported, and all variables must be from the same tdplyr tbl_teradata object.
Notes:

  • The function only accepts numeric features. You must convert categorical features to numeric values before passing them to the formula.

  • If categorical features are passed to the formula, they are ignored, and only numeric features are considered.

  • Provide either "formula" argument or "input.columns" and "response.column" arguments.

Types: character

data

Required Argument.
Specifies the input tbl_teradata.
Types: tbl_teradata

input.columns

Required Argument when "formula" is not provided, optional otherwise.
Specifies the name(s) of the column(s) in "data" to be used for
training the model (predictors, features or independent variables).
Note:

  • Provide either "formula" argument or "input.columns" and "response.column" arguments.

Types: character OR vector of Strings (character)

response.column

Required Argument when "formula" is not provided, optional otherwise.
Specifies the name of the column that contains the class label for
classification or target value (dependent variable) for regression.
Note:

  • Provide either "formula" argument or "input.columns" and "response.column" arguments.

Types: character

family

Optional Argument.
Specifies the distribution exponential family.
Permitted Values: BINOMIAL, GAUSSIAN
Default Value: GAUSSIAN
Types: character

iter.max

Optional Argument.
Specifies the maximum number of iterations over the training data
batches. If the batch size is 0, "iter.max" equals the number of epochs (an epoch is a single pass over the entire training data). If there are 1000 rows on an AMP and the batch size is 10, then 100 iterations complete one epoch and 500 iterations complete 5 epochs over that AMP's data. Because the data is not guaranteed to be equally distributed across all AMPs, other AMPs may complete a different number of epochs.
Note:

  • It must be a positive value less than 10,000,000.

Default Value: 300
Types: integer
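
The epoch arithmetic from the description above can be checked directly:

```r
# With 1000 rows on an AMP and batch.size = 10, one epoch takes
# 1000 / 10 = 100 iterations, so 500 iterations cover 5 epochs.
rows.on.amp <- 1000
batch.size <- 10
iters.per.epoch <- rows.on.amp / batch.size   # 100
epochs <- 500 / iters.per.epoch               # 5
```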

batch.size

Optional Argument.
Specifies the number of observations (training samples) processed
in one mini-batch. The value '0' indicates no mini-batches: the entire dataset is processed in each iteration, and the algorithm becomes Gradient Descent. A value higher than the number of rows on any AMP also falls back to Gradient Descent.
Note:

  • It must be a non-negative integer value.

Default Value: 10
Types: integer

lambda1

Optional Argument.
Specifies the amount of regularization to be added. The higher the
value, the stronger the regularization. It is also used to compute the learning rate when "learning.rate" is set to 'OPTIMAL'.
A value '0' means no regularization.
Note:

  • It must be a non-negative float value.

Default Value: 0.02
Types: float OR integer

alpha

Optional Argument.
Specifies the Elasticnet parameter for penalty computation. It only
takes effect when "lambda1" is greater than 0. The value represents the contribution ratio of L1 in the penalty. A value of '1.0' indicates L1 (LASSO) only, a value of '0' indicates L2 (Ridge) only, and a value in between is a combination of L1 and L2.
Note:

  • It must be a float value between 0 and 1.

Default Value: 0.15
Types: float OR integer
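
One common elastic-net formulation makes the interplay of "lambda1" and "alpha" concrete; the exact penalty used in-database may differ, so treat this as a sketch only.

```r
# A common elastic-net penalty: lambda1 scales the total penalty,
# alpha sets the L1/L2 mix (assumed formulation, for illustration).
elastic_penalty <- function(w, lambda1, alpha) {
  lambda1 * (alpha * sum(abs(w)) + (1 - alpha) * sum(w^2))
}
w <- c(0.5, -2, 1)
elastic_penalty(w, lambda1 = 0.02, alpha = 1)    # pure L1 (LASSO)
elastic_penalty(w, lambda1 = 0.02, alpha = 0)    # pure L2 (Ridge)
elastic_penalty(w, lambda1 = 0.02, alpha = 0.15) # mix of both
```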

iter.num.no.change

Optional Argument.
Specifies the number of iterations (batches) with no improvement in
loss (including the tolerance) to stop training (early stopping).
A value of 0 indicates no early stopping and the algorithm will continue till "iter.max" iterations are reached.
Note:

  • It must be a non-negative integer value.

Default Value: 50
Types: integer

tolerance

Optional Argument.
Specifies the stopping criteria in terms of loss function improvement.
Training stops when the following condition is met:
loss > best_loss - "tolerance" for "iter.num.no.change" iterations.
Notes:

  • Only applicable when "iter.num.no.change" is greater than 0.

  • It must be a non-negative value.

Default Value: 0.001
Types: float OR integer
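
Read literally, the stopping rule above can be expressed as follows; this is a sketch of the bookkeeping, and the in-database implementation may differ in detail.

```r
# Stop once the loss fails to beat best_loss - tolerance for
# iter.num.no.change consecutive checks (early stopping).
should_stop <- function(losses, tolerance, iter.num.no.change) {
  best <- Inf
  no.change <- 0
  for (loss in losses) {
    if (loss > best - tolerance) no.change <- no.change + 1 else no.change <- 0
    best <- min(best, loss)
    if (no.change >= iter.num.no.change) return(TRUE)
  }
  FALSE
}
# Loss plateaus for 3 checks -> stop:
should_stop(c(1.0, 0.5, 0.4999, 0.4999, 0.4999),
            tolerance = 0.001, iter.num.no.change = 3)   # TRUE
# Loss keeps improving -> keep training:
should_stop(c(1.0, 0.5, 0.2),
            tolerance = 0.001, iter.num.no.change = 3)   # FALSE
```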

intercept

Optional Argument.
Specifies whether to estimate an intercept, depending on whether
"data" is already centered or not.
Default Value: TRUE
Types: logical

class.weights

Optional Argument.
Specifies the weights associated with classes. If the weight of a class is omitted,
it is assumed to be 1.0.
Note:

  • Only applicable for the 'BINOMIAL' family. The format is '0:weight,1:weight'. For example, '0:1.0,1:0.5' gives each observation in class 0 twice the weight of an observation in class 1.

Default Value: "0:1.0, 1:1.0"
Types: character
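
How class weights scale the per-observation loss can be shown with a weighted logistic (log) loss; this mirrors the '0:weight,1:weight' idea above, but the function and data here are illustrative assumptions, not the in-database computation.

```r
# Weighted log loss: each observation's contribution is scaled by the
# weight of its class, so down-weighting class 1 makes class-0 errors
# count for relatively more.
wlogloss <- function(y, p, w0, w1) {
  w <- ifelse(y == 0, w0, w1)
  -mean(w * (y * log(p) + (1 - y) * log(1 - p)))
}
y <- c(0, 0, 1, 1)
p <- c(0.2, 0.3, 0.8, 0.6)   # predicted probabilities of class 1
wlogloss(y, p, w0 = 1.0, w1 = 1.0)   # unweighted ("0:1.0,1:1.0")
wlogloss(y, p, w0 = 1.0, w1 = 0.5)   # class 0 weighted double relative to class 1
```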

learning.rate

Optional Argument.
Specifies the learning rate algorithm for SGD iterations.
Permitted Values: CONSTANT, OPTIMAL, INVTIME, ADAPTIVE
Default Value:

  • 'INVTIME' for the 'GAUSSIAN' family, and

  • 'OPTIMAL' for 'BINOMIAL' family.

Types: character

initial.eta

Optional Argument.
Specifies the initial value of eta for the learning rate. When
"learning.rate" is 'CONSTANT', this value applies to all iterations.
Default Value: 0.05
Types: float OR integer

decay.rate

Optional Argument.
Specifies the decay rate for the learning rate.
Note:

  • Only applicable for 'INVTIME' and 'ADAPTIVE' learning rates.

Default Value: 0.25
Types: float OR integer

decay.steps

Optional Argument.
Specifies the decay steps (number of iterations) for the 'ADAPTIVE'
learning rate. The learning rate changes by the decay rate after the specified number of iterations is completed.
Default Value: 5
Types: integer
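
One plausible reading of the 'ADAPTIVE' schedule is a stepwise decay, sketched below; the exact in-database formula is not documented here, so this is an assumption for illustration only.

```r
# Assumed stepwise decay: eta is multiplied by decay.rate after each
# block of decay.steps iterations (defaults mirror the arguments above).
adaptive_eta <- function(iter, initial.eta = 0.05,
                         decay.rate = 0.25, decay.steps = 5) {
  initial.eta * decay.rate ^ (iter %/% decay.steps)
}
adaptive_eta(0)   # 0.05 for the first block of iterations
adaptive_eta(5)   # 0.0125 after the first 5 iterations
```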

momentum

Optional Argument.
Specifies the value to use for the momentum learning rate optimizer.
A larger value indicates a higher momentum contribution. A value of 0 means the momentum optimizer is disabled. For a good momentum contribution, a value between 0.6 and 0.95 is recommended.
Note:

  • It must be a non-negative float value between 0 and 1.

Default Value: 0.0
Types: float OR integer
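
The textbook momentum update conveys what a higher "momentum" value does; this sketch minimizes a simple quadratic and is not Teradata's exact implementation.

```r
# Momentum optimizer on f(w) = (w - 3)^2: the velocity v accumulates
# past gradients, so steps build up in consistent directions.
f_grad <- function(w) 2 * (w - 3)   # gradient of (w - 3)^2
w <- 0
v <- 0
momentum <- 0.9                     # cf. the "momentum" argument
eta <- 0.05
for (i in 1:100) {
  v <- momentum * v - eta * f_grad(w)   # update velocity
  w <- w + v                            # take the momentum step
}
w  # approaches the minimizer at 3
```

The Nesterov variant (see the "nesterov" argument) evaluates the gradient at the look-ahead point w + momentum * v instead of at w.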

nesterov

Optional Argument.
Specifies whether to apply Nesterov optimization to the momentum optimizer
or not.
Note:

  • Only applicable when "momentum" is greater than 0.

Default Value: TRUE
Types: logical

local.sgd.iterations

Optional Argument.
Specifies the number of local iterations to be used for the Local SGD
algorithm. A value of 0 means Local SGD is disabled. A value higher than 0 enables Local SGD, and that many local iterations are performed before the weights for the global model are updated. With the Local SGD algorithm, the recommended values for the arguments are as follows:

  • local.sgd.iterations: 10

  • iter.max: 100

  • batch.size: 50

  • iter.num.no.change: 5

Note:

  • It must be a non-negative integer value.

Default Value: 0
Types: integer

...

Specifies the generic keyword arguments SQLE functions accept. Below
are the generic keyword arguments:

persist:
Optional Argument.
Specifies whether to persist the results of the
function in a table or not. When set to TRUE, results are persisted in a table; otherwise, results are garbage collected at the end of the session.
Default Value: FALSE
Types: logical

volatile:
Optional Argument.
Specifies whether to put the results of the
function in a volatile table or not. When set to TRUE, results are stored in a volatile table, otherwise not.
Default Value: FALSE
Types: logical

The function allows the user to partition, hash, order, or local order the input data. These generic arguments are available for each argument that accepts tbl_teradata as input, and can be accessed as:

  • "<input.data.arg.name>.partition.column" accepts character or vector of character (Strings)

  • "<input.data.arg.name>.hash.column" accepts character or vector of character (Strings)

  • "<input.data.arg.name>.order.column" accepts character or vector of character (Strings)

  • "local.order.<input.data.arg.name>" accepts logical

Note:
These generic arguments are supported by tdplyr only if the underlying SQL Engine function supports them; otherwise, an exception is raised.

Value

The function returns an object of class "td_glm_sqle", which is a named list containing objects of class "tbl_teradata".
Named list member(s) can be referenced directly with the "$" operator using the name(s):

  1. result

  2. output.data

Examples

  
    
    # Get the current context/connection.
    con <- td_get_context()$connection
    
    # Load the example data.
    loadExampleData("glm_example", "admissions_train")
    
    # Create tbl_teradata object.
    admissions_train <- tbl(con, "admissions_train")
    
    # Check the list of available analytic functions.
    display_analytic_functions()
    
    # td_glm_sqle() function requires features in numeric format for processing,
    # so first let's transform categorical columns to numerical columns
    # using VAL td_transform_valib() function.
    
    # Set VAL install location.
    options(val.install.location = "VAL")
    
    # Define encoders for categorical columns.
    masters_code <- tdOneHotEncoder(values = c("yes", "no"),
                                    column = "masters",
                                    out.column = "masters")

    stats_code <- tdOneHotEncoder(values=c("Advanced", "Novice"),
                                  column="stats",
                                  out.column="stats")

    programming_code <- tdOneHotEncoder(values=c("Advanced",
                                                "Novice",
                                                "Beginner"),
                                        column="programming",
                                        out.column="programming")
    # Retain numerical columns.
    retain <- tdRetain(columns=c("admitted", "gpa"))
    
    # Transform categorical columns to numeric columns.
    glm_numeric_input <- td_transform_valib(data=admissions_train,
                                            one.hot.encode=c(masters_code,
                                                             stats_code,
                                                             programming_code),
                                            retain=retain)
    
    # Example 1: Generate a generalized linear model (GLM) using
    #            the input tbl_teradata and the provided formula.
    GLM_out_1 <- td_glm_sqle(
                    formula = admitted ~ gpa + yes_masters +
                    no_masters + Advanced_stats + Novice_stats +
                    Advanced_programming + Novice_programming + 
                    Beginner_programming,
                    data = glm_numeric_input$result,
                    learning.rate = 'INVTIME',
                    momentum = 0.0
                    )
    
    # Print the result.
    print(GLM_out_1$result)
    print(GLM_out_1$output.data)
    
    # Example 2: Generate a generalized linear model (GLM) using
    #            the input tbl_teradata with input.columns and
    #            response.column instead of a formula.
    GLM_out_2 <- td_glm_sqle(input.columns= 
                                    c("gpa", "yes_masters", "no_masters",
                                    "Advanced_stats", "Novice_stats",
                                    "Advanced_programming",
                                    "Novice_programming",
                                    "Beginner_programming"),
                             response.column = "admitted",
                             data = glm_numeric_input$result,
                             learning.rate = 'INVTIME',
                             momentum = 0.0)
    
    # Print the result.
    print(GLM_out_2$result)
    print(GLM_out_2$output.data)