You can export data to a remote system by using an INSERT statement to place data into an existing table. The target table can be empty or can already contain data; if it contains data, the exported rows are appended to it.
Example: Using INSERT with Hive as the Initiator Connector
This example shows an INSERT statement issued from the Beeline command-line shell. Hive is the initiator connector and Teradata is the target connector.
jdbc:hive2://localhost:10000> insert into cardata_remote select * from cardata_local;
Result:
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
INFO : number of splits:1
INFO : Submitting tokens for job: job_1472862876236_0011
INFO : The url to track the job: http://tdh127m2.labs.teradata.com:8088/proxy/application_1472862876236_0011/
INFO : Starting Job = job_1472862876236_0011, Tracking URL = http://tdh127m2.labs.teradata.com:8088/proxy/application_1472862876236_0011/
INFO : Kill Command = /usr/hdp/2.6.5.0-292/hadoop/bin/hadoop job -kill job_1472862876236_0011
INFO : Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
INFO : 2016-09-09 14:42:21,870 Stage-0 map = 0%, reduce = 0%
INFO : 2016-09-09 14:42:31,473 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 4.54 sec
INFO : MapReduce Total cumulative CPU time: 4 seconds 540 msec
INFO : Ended Job = job_1472862876236_0011
No rows affected (22.979 seconds)
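Because INSERT appends, running the statement again adds the selected rows to the remote table rather than replacing them. As a minimal sketch of appending a subset of rows, the following adds a WHERE clause to the same statement; the filter column model_year is hypothetical and not part of the example tables:

jdbc:hive2://localhost:10000> insert into cardata_remote select * from cardata_local where model_year >= 2015;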