Teradata® Package for Python Function Reference - 20.00
Release Number: 20.00.00.03
Published: December 2024
- teradataml.opensource._lightgbm._LightgbmBoosterWrapper.deploy
deploy(self, model_name, replace_if_exists=False)
- DESCRIPTION:
Deploys the model held by the interface object to Vantage.
PARAMETERS:
model_name:
Required Argument.
Specifies the unique name of the model to be deployed.
Types: str
replace_if_exists:
Optional Argument.
Specifies whether to replace the model if a model with the same name already
exists in Vantage. If this argument is set to False and a model with the same
name already exists, then the function raises an exception.
Default Value: False
Types: bool
RETURNS:
The opensource object wrapper.
RAISES:
TeradataMlException, if a model with the name "model_name" already exists and the argument
"replace_if_exists" is set to False (see the illustrative sketch after the examples).
EXAMPLES:
# Import the required libraries and load the td_lightgbm opensource interface object.
>>> from teradataml import td_lightgbm
# Example 1: Deploy the model trained using td_lightgbm.train() function to Vantage.
# Create Dataset object locally, assuming df_x and df_y are the feature and label teradataml
# DataFrames.
>>> lgbm_data = td_lightgbm.Dataset(data=df_x, label=df_y, free_raw_data=False)
>>> lgbm_data
<lightgbm.basic.Dataset object at ....>
# Train the model using the `td_lightgbm` interface object.
>>> model = td_lightgbm.train(params={}, train_set=lgbm_data, num_boost_round=30, valid_sets=[lgbm_data])
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000043 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 532
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 4
[1] valid_0's l2: 0.215811
[2] valid_0's l2: 0.188138
[3] valid_0's l2: 0.166146
...
...
[29] valid_0's l2: 0.042255
[30] valid_0's l2: 0.0416953
# Deploy the model to Vantage.
>>> lgb_model = td_lightgbm.deploy("lgbm_train_model_ver_2", model)
>>> lgb_model
<lightgbm.basic.Booster object at ...>
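# Example 2 (illustrative sketch, not part of the official examples): redeploy the
# trained wrapper from Example 1 under the same name, using the instance method
# documented above. With the default replace_if_exists=False the call raises
# TeradataMlException because "lgbm_train_model_ver_2" already exists in Vantage;
# passing replace_if_exists=True overwrites the existing model instead. The import
# path used below for the exception class is an assumption.
>>> from teradataml.common.exceptions import TeradataMlException
>>> try:
...     model.deploy("lgbm_train_model_ver_2")
... except TeradataMlException:
...     # The name is already taken, so explicitly overwrite the existing model.
...     model.deploy("lgbm_train_model_ver_2", replace_if_exists=True)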