Before using the Spark SQL Connector (initiator or target), the Hadoop administrator must create the hdfs:///tdqg-spark/ directory. This directory stores the following files:
- A dummy text file that the Spark SQL connector creates the first time it is used; this file is required for the connector to work.
- The cache files for user-defined foreign server objects used by the Spark SQL initiator.
- Temporary files created when the target connector runs using the Spark Application Execution Mechanism.
All users who access the Spark SQL connector (initiator or target) must have WRITE permission on this directory.
- Log on to any Hadoop node.
- Create the tdqg-spark directory:

  ```bash
  hdfs dfs -mkdir /tdqg-spark/
  ```
- Set the directory permissions, as in the following example:

  ```bash
  hdfs dfs -chmod 777 /tdqg-spark/
  ```

  The permission 777 is only an example; the Hadoop administrator determines the actual permissions, as long as the requirement above (WRITE access for all connector users) is met.
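After the directory is created, it can help to confirm that the permissions took effect and that a connector user can actually write to it. The following sketch assumes shell access on a Hadoop node; the account name qguser is a placeholder for one of your connector users:

```bash
# Check the directory's permissions (expect drwxrwxrwx for mode 777).
hdfs dfs -ls / | grep tdqg-spark

# Confirm that a connector user can write to the directory;
# "qguser" is a placeholder account name.
sudo -u qguser hdfs dfs -touchz /tdqg-spark/.write_test
sudo -u qguser hdfs dfs -rm /tdqg-spark/.write_test
```

If making the directory world-writable with 777 is too permissive for your site, one common alternative is to grant WRITE access through a group that contains all connector users. The group name tdqg-users below is hypothetical, and the chown must be run as the HDFS superuser:

```bash
# Give group members write access instead of making the
# directory world-writable (775 instead of 777).
hdfs dfs -chown hdfs:tdqg-users /tdqg-spark/
hdfs dfs -chmod 775 /tdqg-spark/
```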