Creating an HDFS Directory for the Spark SQL Connector

QueryGrid™ Installation and User Guide - 3.06

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, Lake, VMware
Product: Teradata QueryGrid
Release Number: 3.06
Published: December 2024
Product Category: Analytical Ecosystem
Before using the Spark SQL Connector (initiator or target), the Hadoop administrator must create the hdfs:///tdqg-spark/ directory. This directory stores the following files:
  • A dummy text file that the Spark SQL connector creates on first use; this file is required for the connector to work.
  • The cache files for user-defined foreign server objects used by the Spark SQL initiator.
  • Temporary files when running the target connector using the Spark Application Execution Mechanism.

All users accessing the Spark SQL connector (initiator or target) must have WRITE permission on the directory.
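After the directory has been created (see the steps below), each connector user can confirm this requirement by creating and removing a zero-length probe file. This is only a sketch: the probe file name `.tdqg-write-test` is illustrative and not part of QueryGrid, and the `hdfs` CLI is assumed to be on the PATH.

```shell
#!/bin/sh
# Check that the current account has WRITE access to /tdqg-spark/.
# The probe file name is illustrative, not part of the product.
DIR=/tdqg-spark

if command -v hdfs >/dev/null 2>&1; then
    # -touchz creates a zero-length file; success proves WRITE access.
    hdfs dfs -touchz "$DIR/.tdqg-write-test" \
        && echo "WRITE access to $DIR confirmed"
    # Remove the probe file again.
    hdfs dfs -rm -skipTrash "$DIR/.tdqg-write-test" >/dev/null
else
    echo "hdfs CLI not found; run this check on a Hadoop node"
fi
```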

  1. Log on to any Hadoop node.
  2. Create the tdqg-spark directory:
    hdfs dfs -mkdir /tdqg-spark/
  3. Set the permissions, as in the following example:
    hdfs dfs -chmod 777 /tdqg-spark/
    The permission mode 777 is only an example; the Hadoop administrator determines the actual permissions, provided all connector users retain WRITE access to the directory.
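The steps above can be combined into a short, rerunnable script. This is a sketch, not part of the product: it assumes the `hdfs` CLI from a Hadoop client installation is on the PATH, and it keeps the example mode 777 from step 3, which the Hadoop administrator may change.

```shell
#!/bin/sh
# Sketch of the directory-setup steps above for the Spark SQL connector.
DIR=/tdqg-spark

if command -v hdfs >/dev/null 2>&1; then
    # -p makes the command idempotent: no error if the directory exists.
    hdfs dfs -mkdir -p "$DIR"
    # 777 is the example mode from the guide; the actual mode is the
    # administrator's choice, as long as every connector user keeps
    # WRITE access.
    hdfs dfs -chmod 777 "$DIR"
    # Confirm the directory and its permissions.
    hdfs dfs -ls -d "$DIR"
else
    echo "hdfs CLI not found; run these commands on a Hadoop node"
fi
```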