When writing a CSV file in teradatamlspk, the file is stored either in cloud storage or in the local file system.
PySpark
df.write.csv(specified_path)
teradatamlspk
Store the file in cloud storage:
df.write.options(authorization={"Access_ID": id, "Access_Key": key}).csv(specified_path)
Store the file in local file system:
from teradataml import fastexport
fastexport(df.toTeradataml(), export_to="csv", csv_file="Test.csv")