- Ensure Kerberos is correctly configured on the Hadoop cluster.
- For CDH 5.5.1, 5.8.0 and 5.9.0 on Teradata SLES 11 SP3 clusters, work with your account team to obtain these Kerberos setup instructions, and confirm that the instructions are implemented:
https://teraworks.teradata.com/pages/viewpage.action?spaceKey=NGAP&title=CDH+Kerberos+Setup
- For HDP 2.3.4, 2.4.2, 2.5.3 and 2.5.5 on Teradata SLES 11 SP3 clusters, work with your account team to obtain these Kerberos setup instructions, and confirm that the instructions are implemented:
https://teraworks.teradata.com/display/NGAP/HDP+2.4+and+2.5+Kerberos+Setup
- For HDP 2.3.4, 2.4.2, 2.5.3 and 2.5.5 on Red Hat clusters, confirm that the Kerberos setup instructions in the Ambari 2.4.1.0 Guide are implemented.
- Kerberos clients must be installed on all nodes in the cluster. If the Kerberos clients (kinit and kdestroy) are not present, the installation will fail.
- For TD SLES, the clients are installed by implementing the above instructions for TD SLES 11 SP3.
- For RHEL, you must install the Kerberos client packages on all nodes while configuring Kerberos, as shown in the example below.
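For example, on RHEL the kinit and kdestroy utilities are provided by the krb5-workstation package (a sketch assuming a yum-based installation; package names can vary by RHEL release). Run it on every node in the cluster:
yum install -y krb5-workstation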
- Execute steps 2 through 5 on the Key Distribution Center (KDC) server node. To locate the KDC server hostname, look in /etc/krb5.conf on the edge node for the kdc = <kdc_server_hostname> parameter.
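For example, this command prints the KDC entries from the edge node's configuration (the grep pattern is illustrative; the layout of krb5.conf can vary):
grep "kdc =" /etc/krb5.conf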
- Log into the KDC server node.
- Determine whether the HDFS principal is present in the KDC. Start the kadmin console and list the principals:
# kadmin.local
kadmin.local: listprincs
- If present, the output of the listprincs command will show an HDFS principal similar to hdfs@<domain-name>. For example:
hdfs@CDH220.HADOOP.TERADATA.COM
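If the principal list is long, you can filter it non-interactively. This one-liner is a sketch that uses the -q option of kadmin.local to run a single query:
kadmin.local -q "listprincs" | grep "^hdfs@"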
- If the HDFS principal does not exist, use the kadmin console to create it. If the -randkey option is not specified, you will be prompted for a password. For example:
kadmin.local: addprinc -randkey hdfs@<domain-name>
WARNING: no policy specified for hdfs@<domain-name>; defaulting to no policy
Principal "hdfs@<domain-name>" created.
- Using the kadmin console, create the principal for the beehive user, replacing the domain name CDH220.HADOOP.TERADATA.COM with the domain name identified in the previous step. For example:
kadmin.local: addprinc -randkey beehive@CDH220.HADOOP.TERADATA.COM
WARNING: no policy specified for beehive@CDH220.HADOOP.TERADATA.COM; defaulting to no policy
Principal "beehive@CDH220.HADOOP.TERADATA.COM" created.
- Using the kadmin console, extract the hdfs.keytab and beehive.keytab files to your current working directory, replacing the domain name CDH220.HADOOP.TERADATA.COM with the domain name used in the previous step. For example:
kadmin.local: xst -k hdfs.keytab hdfs@CDH220.HADOOP.TERADATA.COM
Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:hdfs.keytab.
Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:hdfs.keytab.
kadmin.local: xst -k beehive.keytab beehive@CDH220.HADOOP.TERADATA.COM
Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:beehive.keytab.
Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:beehive.keytab.
Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:beehive.keytab.
Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:beehive.keytab.
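As an optional check, you can confirm the contents of each keytab with klist; the -k option reads a keytab and -t displays entry timestamps:
klist -kt hdfs.keytab
klist -kt beehive.keytab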
- Use scp to transfer the hdfs.keytab and beehive.keytab files to the edge node under the /root directory.
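For example, from your working directory on the KDC server node (the <edge_node_hostname> placeholder is illustrative; substitute your edge node's hostname):
scp hdfs.keytab beehive.keytab root@<edge_node_hostname>:/root/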
- Copy the hdfs.keytab file to the HDFS user’s home directory. For HDP, the home directory is /home/hdfs; for CDH, it is /var/lib/hadoop-hdfs. For example:
cp /root/hdfs.keytab /var/lib/hadoop-hdfs/    # /home/hdfs for HDP, /var/lib/hadoop-hdfs for CDH
chown hdfs:hdfs /var/lib/hadoop-hdfs/hdfs.keytab
- Obtain a Kerberos ticket by issuing this command:
su - hdfs -c "kinit -kt hdfs.keytab hdfs"
- Verify the Kerberos ticket by issuing this command:
su - hdfs -c "klist"
For example:
cdh220e1:~ # su - hdfs -c "klist"
Ticket cache: FILE:/tmp/krb5cc_105
Default principal: hdfs@CDH220.HADOOP.TERADATA.COM
Valid starting       Expires              Service principal
06/10/16 00:42:15    06/11/16 00:42:15    krbtgt/CDH220.HADOOP.TERADATA.COM@CDH220.HADOOP.TERADATA.COM
        renew until 06/17/16 00:42:15
Kerberos 4 ticket cache: /tmp/tkt105
klist: You have no tickets cached
- Create /aster in HDFS by executing these commands:
su - hdfs -c "hdfs dfs -ls /"
su - hdfs -c "hdfs dfs -mkdir /aster"
su - hdfs -c "hdfs dfs -chown -R beehive:beehive /aster"
su - hdfs -c "hdfs dfs -ls /"
- On the edge node (the queen), copy the beehive.keytab file and set the appropriate ownership by executing these commands:
mkdir /home/beehive/.keytab
chown beehive:beehive /home/beehive/.keytab
cp /root/beehive.keytab /home/beehive/.keytab
chown beehive:beehive /home/beehive/.keytab/beehive.keytab
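As an optional verification, you can confirm the ownership and test the keytab by obtaining and listing a ticket as the beehive user (this assumes the beehive principal created earlier corresponds to the local beehive account):
ls -l /home/beehive/.keytab/beehive.keytab
su - beehive -c "kinit -kt /home/beehive/.keytab/beehive.keytab beehive"
su - beehive -c "klist"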