Teradata® Package for Python Function Reference on VantageCloud Lake
- Deployment: VantageCloud
- Edition: Lake
- Product: Teradata Package for Python
- Release Number: 20.00.00.08
- Published: November 2025
- Locale: en-US
- Last Edition: 2025-12-05
- dita:id: TeradataPython_FxRef_Lake_2000
- Product Category: Teradata Vantage
- teradataml.store.feature_store.models.FeatureCatalog.upload_features = upload_features(self, object, entity=None, filters=None, features=None, as_of=None, description=None)
- DESCRIPTION:
Uploads feature values into the Feature Catalog from the given source.
Notes:
* Values in Entity column(s) must be unique.
* Entity column(s) should not have null values.
* A feature can be associated with only one entity within a data
  domain. To associate a feature with the same name with another
  entity, use a different data domain.
PARAMETERS:
object:
Required Argument.
Specifies the source to ingest feature values. It can be one of the following:
* teradataml DataFrame
* Feature group
* Process id
Notes:
* If "object" is of type teradataml DataFrame, then "entity"
and "features" should be provided.
* If "object" is of type str, then it is treated as the
  process id of an existing FeatureProcess and that process is
  rerun. Entity and features are taken from the existing
  feature process. Hence, the arguments "entity" and "features"
  are ignored.
* If "object" is of type FeatureGroup, then entity and features
are taken from the FeatureGroup. Hence, the arguments "entity"
and "features" are ignored.
Types: DataFrame or FeatureGroup or str
entity:
Optional Argument.
Specifies the Entity for the DataFrame.
Notes:
* Ignored when "object" is of type FeatureGroup or str.
* If a string or list of strings is provided, then "object" should
have these columns in it.
* If Entity object is provided, then associated columns in Entity
object should be present in DataFrame.
Types: Entity or str or list of str
features:
Optional Argument.
Specifies the list of features to be considered in the feature
process. Feature values are ingested only for these features.
Note:
* Ignored when "object" is of type FeatureGroup or str.
Types: Feature or list of Feature or str or list of str.
filters:
Optional Argument.
Specifies filters to apply to the data source while ingesting
feature values for the FeatureProcess.
Types: str or list of str or ColumnExpression or list of ColumnExpression.
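The two filter forms can be sketched as follows. This is an illustrative sketch, not output from a live session: the predicate strings and the `upload_features` call are assumptions, and the `fc`/`df` objects are presumed to exist as in the examples below.

```python
# String filters are SQL-style predicates evaluated against the source.
filters = ["Jan > 100", "Feb IS NOT NULL"]

# ColumnExpression form (assumes a teradataml DataFrame 'df'):
# filters = [df.Jan > 100, df.Feb != None]

# Hypothetical call, assuming a connected FeatureCatalog 'fc':
# fp = fc.upload_features(object=df, entity="accounts",
#                         features=["Jan", "Feb"], filters=filters)
print(len(filters))
```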
as_of:
Optional Argument.
Specifies the time period for which feature values are ingested.
Note:
* If "as_of" is specified as either a string or a datetime.datetime,
  then the specified value is treated as the start of the time period
  and the end defaults to '31-DEC-9999 23:59:59.999999+00:00'.
Types: str or datetime.datetime or tuple
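The note above can be sketched with stdlib datetimes: a single value gives an open-ended window, while a tuple gives an explicit (start, end) window. The `upload_features` call is a hedged illustration that assumes the connected FeatureCatalog `fc` and DataFrame `df` from the examples below.

```python
from datetime import datetime, timezone

# Single value: treated as the start of the period; the end defaults
# to '31-DEC-9999 23:59:59.999999+00:00'.
start = datetime(2025, 1, 1, tzinfo=timezone.utc)

# Tuple: explicit (start, end) window.
window = (start, datetime(2025, 6, 30, 23, 59, 59, tzinfo=timezone.utc))

# Hypothetical call, assuming a connected FeatureCatalog 'fc' and
# DataFrame 'df':
# fp = fc.upload_features(object=df, entity="accounts",
#                         features=["Jan", "Feb"], as_of=window)
print(window[0].isoformat())
```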
description:
Optional Argument.
Specifies description for the FeatureProcess.
Types: str
RETURNS:
FeatureProcess.
RAISES:
TeradataMlException
EXAMPLES:
>>> from teradataml import DataFrame, load_example_data
>>> load_example_data('dataframe', ['sales'])
>>> df = DataFrame("sales")
# Create FeatureStore repo 'vfs_v1'.
>>> from teradataml import FeatureStore
>>> fs = FeatureStore(repo='vfs_v1', data_domain='sales')
Repo vfs_v1 does not exist. Run FeatureStore.setup() to create the repo and setup FeatureStore.
>>> fs.setup()
True
# Create an instance of FeatureCatalog.
>>> from teradataml import FeatureCatalog
>>> fc = FeatureCatalog(repo='vfs_v1', data_domain='sales')
# Example 1: Upload features from DataFrame.
# Before uploading features, let's first look at available features.
>>> fc.list_features()
entity_name feature_id name data_type feature_type valid_start valid_end
>>> fp = fc.upload_features(object=df,
... entity=["accounts"],
... features=["Feb", "Jan", "Mar", "Apr"])
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' started.
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' completed.
# Verify the features are uploaded.
>>> fc.list_features()
feature_id name data_type feature_type valid_start valid_end
entity_name
accounts 4 Feb FLOAT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 6 Apr BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 5 Mar BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 100002 Jan BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
# Example 2: Upload features from FeatureGroup.
# Create a FeatureGroup object.
>>> from teradataml import FeatureGroup
>>> fg = FeatureGroup.from_DataFrame(name="sales", entity_columns="accounts", df=df)
# Create FeatureCatalog object.
>>> fc = FeatureCatalog(repo='vfs_v1', data_domain='sales')
>>> fp = fc.upload_features(object=fg)
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' started.
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' completed.
# Verify the features are uploaded.
>>> fc.list_features()
feature_id name data_type feature_type valid_start valid_end
entity_name
accounts 4 Feb FLOAT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 6 Apr BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 5 Mar BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
accounts 100002 Jan BIGINT CONTINUOUS 2025-06-12 05:28:42.916821+00: 9999-12-31 23:59:59.999999+00:
# Example 3: Upload features through process id.
# Create FeatureProcess object.
>>> from teradataml import FeatureProcess
>>> fp = FeatureProcess(repo='vfs_v1',
... data_domain='sales',
... object=df,
... entity='accounts',
... features=['Jan', 'Feb', 'Mar', 'Apr'])
>>> fp.run()
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' started.
Process '01c70f05-4067-11f0-9e8a-fb57338c2e68' completed.
# Create FeatureCatalog object.
>>> fc = FeatureCatalog(repo='vfs_v1', data_domain='sales')
>>> fp = fc.upload_features(object=fp.process_id)