2.06 - Creating an HDFS Directory for the Spark SQL Connector - Teradata QueryGrid

Teradata® QueryGrid™ Installation and User Guide

Product
Teradata QueryGrid
Release Number
2.06
Published
September 2018
Language
English (United States)
Last Update
2018-11-26
dita:mapPath
blo1527621308305.ditamap
dita:ditavalPath
ft:empty
dita:id
lfq1484661135852
Before using the Spark SQL Connector (initiator or target), the Hadoop administrator must create the hdfs:///tdqg-spark/ directory. The directory serves the following purposes:
  • It stores a dummy text file that the Spark SQL connector creates on first use; this file is required for the connector to work.
  • It stores the cache files for user-defined foreign server objects that are used by the Spark SQL initiator.
All users accessing the Spark SQL connector (initiator or target) must have WRITE permission on the directory.
  1. Log in to any Hadoop node.
  2. Create the directory using the following command: hdfs dfs -mkdir /tdqg-spark/
  3. Set the directory permissions, as in this example: hdfs dfs -chmod 777 /tdqg-spark/
    The permission 777 is only an example; the Hadoop administrator determines the actual permissions, provided all connector users retain WRITE access to the directory.
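The two commands above can also be scripted. The sketch below is a minimal illustration, not part of QueryGrid: the helper names `build_setup_commands` and `run_setup` are assumptions, and the 777 mode simply mirrors the guide's example; the Hadoop administrator chooses the real permissions.

```python
import shutil
import subprocess

SPARK_DIR = "/tdqg-spark/"  # directory name required by the Spark SQL connector


def build_setup_commands(path=SPARK_DIR, mode="777"):
    """Return the hdfs CLI invocations for the two setup steps.

    mode="777" mirrors the guide's example; any mode that leaves all
    connector users with WRITE access would satisfy the requirement.
    """
    return [
        ["hdfs", "dfs", "-mkdir", "-p", path],  # step 2: create the directory
        ["hdfs", "dfs", "-chmod", mode, path],  # step 3: set permissions
    ]


def run_setup(path=SPARK_DIR, mode="777"):
    """Execute the setup commands; must be run on a Hadoop node."""
    if shutil.which("hdfs") is None:
        raise RuntimeError("hdfs CLI not found; run this on a Hadoop node")
    for cmd in build_setup_commands(path, mode):
        subprocess.run(cmd, check=True)
```

The `-p` flag makes the `mkdir` step idempotent, so rerunning the script on a node where the directory already exists does not fail.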