ScaleTransform(data=None, object=None, accumulate=None, attribute_name_column=None, attribute_value_column=None, **generic_arguments)
DESCRIPTION:
ScaleTransform() function scales the specified columns in the input data, using the output of the ScaleFit() function.
PARAMETERS:
data:
Required Argument.
Specifies the input teradataml DataFrame.
Types: teradataml DataFrame
object:
Required Argument.
Specifies the teradataml DataFrame containing the output generated by
the ScaleFit() function, or an instance of ScaleFit.
Types: teradataml DataFrame or ScaleFit
accumulate:
Optional Argument.
Specifies the names of input teradataml DataFrame columns to copy to the output.
Types: str OR list of Strings (str)
attribute_name_column:
Optional Argument.
Specifies the name of the column in "data" which contains the attribute names.
Note:
* This is required for sparse input.
Types: str
attribute_value_column:
Optional Argument.
Specifies the name of the column in "data" which contains the attribute values.
Note:
* This is required for sparse input.
Types: str
**generic_arguments:
Specifies the generic keyword arguments SQLE functions accept.
Below are the generic keyword arguments:
persist:
Optional Argument.
Specifies whether to persist the results of the function in a table or not.
When set to True, results are persisted in a table; otherwise, results
are garbage collected at the end of the session. See Example 6 below.
Default Value: False
Types: boolean
volatile:
Optional Argument.
Specifies whether to put the results of the function in a volatile table or not.
When set to True, results are stored in a volatile table, otherwise not.
Default Value: False
Types: boolean
The function allows the user to partition, hash, order or local order the input
data. These generic arguments are available for each argument that accepts a
teradataml DataFrame as input and can be accessed as:
* "<input_data_arg_name>_partition_column" accepts str or list of str (Strings)
* "<input_data_arg_name>_hash_column" accepts str or list of str (Strings)
* "<input_data_arg_name>_order_column" accepts str or list of str (Strings)
* "local_order_<input_data_arg_name>" accepts boolean
Note:
These generic arguments are supported by teradataml if the underlying
SQL Engine function supports them, else an exception is raised.
RETURNS:
Instance of ScaleTransform.
Output teradataml DataFrames can be accessed using attribute
references, such as ScaleTransformObj.<attribute_name>.
Output teradataml DataFrame attribute name is:
result
RAISES:
TeradataMlException, TypeError, ValueError
EXAMPLES:
# Notes:
# 1. Get the connection to Vantage to execute the function.
# 2. One must import the required functions mentioned in
# the example from teradataml.
# 3. The function raises an error if it is not supported on the
# Vantage system the user is connected to.
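# Import the required modules. This is a minimal sketch; it assumes a
# working connection to Vantage has already been established.
from teradataml import DataFrame, ScaleFit, ScaleTransform
from teradataml import load_example_data, display_analytic_functions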
# Load the example data.
load_example_data("teradataml", ["scale_housing"])
load_example_data("scale", ["scale_attributes", "scale_parameters",
"scale_input_partitioned", "scale_input_sparse", "scale_input_part_sparse"])
# Create teradataml DataFrame.
scaling_house = DataFrame.from_table("scale_housing")
scale_attribute = DataFrame.from_table("scale_attributes")
scale_parameter = DataFrame.from_table("scale_parameters")
scale_inp_part = DataFrame.from_table("scale_input_partitioned")
scale_inp_sparse = DataFrame.from_table("scale_input_sparse")
scale_inp_part_sparse = DataFrame.from_table("scale_input_part_sparse")
# Check the list of available analytic functions.
display_analytic_functions()
# Example 1: Scale "lotsize" with respect to mean value of the column.
fit_obj = ScaleFit(data=scaling_house,
target_columns="lotsize",
scale_method="MEAN",
miss_value="KEEP",
global_scale=False,
multiplier="1",
intercept="0")
# Print the result DataFrame.
print(fit_obj.output)
# Scale "lotsize" column.
# Note that the teradataml DataFrame representing the model is passed
# as input to "object".
obj = ScaleTransform(data=scaling_house,
object=fit_obj.output,
accumulate="price")
# Print the result DataFrame.
print(obj.result)
# Example 2: Scale "lotsize" column. Note that model is passed as instance of
# ScaleFit to "object".
obj1 = ScaleTransform(data=scaling_house,
object=fit_obj,
accumulate="price")
# Print the result DataFrame.
print(obj1.result)
# Example 3: Create statistics to scale "fare" and "age" columns with respect
# to maximum absolute value for partitioned input, then apply the transform.
fit_obj = ScaleFit(data=scale_inp_part,
attribute_data=scale_attribute,
parameter_data=scale_parameter,
target_columns=['fare', 'age'],
scale_method="maxabs",
miss_value="zero",
global_scale=False,
data_partition_column='pid',
attribute_data_partition_column='pid',
parameter_data_partition_column='pid')
obj = ScaleTransform(data=scale_inp_part,
object=fit_obj.output,
accumulate=['pid', 'passenger'],
data_partition_column='pid',
object_partition_column='pid')
# Print the result DataFrame.
print(obj.result)
# Example 4: Create statistics to scale "fare" column with respect to
# range for sparse input, then apply the transform.
fit_obj = ScaleFit(data=scale_inp_sparse,
target_attribute=['fare'],
scale_method="range",
miss_value="keep",
global_scale=False,
attribute_name_column='attribute_column',
attribute_value_column='attribute_value')
obj = ScaleTransform(data=scale_inp_sparse,
object=fit_obj.output,
accumulate=['passenger'],
attribute_name_column='attribute_column',
attribute_value_column='attribute_value')
# Print the result DataFrame.
print(obj.result)
# Example 5: Create statistics to scale "fare" column with respect to
# maximum absolute value for sparse input with partition column, then
# apply the transform.
fit_obj = ScaleFit(data=scale_inp_part_sparse,
parameter_data=scale_parameter,
attribute_data=scale_attribute,
scale_method="maxabs",
miss_value="zero",
global_scale=False,
attribute_name_column='attribute_column',
attribute_value_column='attribute_value',
data_partition_column='pid',
attribute_data_partition_column='pid',
parameter_data_partition_column='pid')
obj = ScaleTransform(data=scale_inp_part_sparse,
object=fit_obj.output,
accumulate=["passenger",'pid'],
attribute_name_column='attribute_column',
attribute_value_column='attribute_value',
object_partition_column='pid',
data_partition_column='pid')
# Print the result DataFrame.
print(obj.result)
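# Example 6: Scale the "lotsize" column and persist the transformed result
# in a table. This is a minimal sketch of the "persist" generic argument,
# reusing the "scale_housing" input from Example 1; it assumes the
# underlying SQL Engine function supports this generic argument.
fit_obj = ScaleFit(data=scaling_house,
target_columns="lotsize",
scale_method="MEAN")
obj = ScaleTransform(data=scaling_house,
object=fit_obj.output,
accumulate="price",
persist=True)
# Print the result DataFrame; the result is also persisted in a table
# instead of being garbage collected at the end of the session.
print(obj.result)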