7.00.02 - Creating /aster on clusters that are Kerberos-enabled - Aster Execution Engine

Aster Instance Installation Guide for Aster-on-Hadoop Only

prodname
Aster Execution Engine
vrm_release
7.00.02
created_date
July 2017
category
Installation
featnum
B700-5022-700K
  • Kerberos clients must be installed on all nodes in the cluster. If the Kerberos clients (kinit and kdestroy) are not present, the installation will fail.
    • For TD SLES, the clients are installed by following the TD SLES 11 SP3 instructions above.
    • For RHEL, you must install the client packages on all nodes while configuring Kerberos.
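The client check above can be scripted before starting the install. This is a minimal sketch; check_krb_clients is an illustrative helper name, not part of the Aster tooling:

```shell
# Sketch: verify the Kerberos client binaries exist on a node before installing.
# check_krb_clients is a hypothetical helper, not part of the product.
check_krb_clients() {
    missing=0
    for cmd in "$@"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "found: $cmd"
        else
            echo "MISSING: $cmd"
            missing=1
        fi
    done
    return "$missing"
}

# Run on every node; a non-zero exit means the krb5 client packages are absent:
# check_krb_clients kinit kdestroy || echo "install the Kerberos clients first"
```

Running this on each node before the Aster installation avoids a mid-install failure on a node that is missing kinit or kdestroy.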

  1. Execute steps 2 through 5 on the Key Distribution Center (KDC) server node. To locate the KDC server hostname, look in /etc/krb5.conf on the edge node for the kdc = <kdc_server_hostname> parameter.
  2. Log into the KDC server node.
  3. Determine whether the HDFS principal is present in the KDC by running the listprincs command in the kadmin console:
    kadmin.local:  listprincs
    1. If present, the output of the listprincs command will show an HDFS principal similar to hdfs@<domain-name>. For example:
      hdfs@CDH220.HADOOP.TERADATA.COM
    2. If the HDFS principal does not exist, use the kadmin console to create the HDFS principal. If the -randkey option is not specified, you will be prompted for a password. For example:
      kadmin.local:  addprinc -randkey hdfs@<domain-name>
      WARNING: no policy specified for hdfs@<domain-name>; defaulting to no policy
      Principal "hdfs@<domain-name>" created.
  4. Using the kadmin console, create a principal for the beehive user. For example:
    Replace the domain name CDH220.HADOOP.TERADATA.COM with the domain name identified in the previous step.
    kadmin.local:  addprinc -randkey beehive@CDH220.HADOOP.TERADATA.COM
    WARNING: no policy specified for beehive@CDH220.HADOOP.TERADATA.COM; defaulting to no policy
    Principal "beehive@CDH220.HADOOP.TERADATA.COM" created.
  5. Using the kadmin console, extract the hdfs.keytab and the beehive.keytab files to your current working directory. For example:
    Replace the domain name CDH220.HADOOP.TERADATA.COM with the domain name used in the previous step.
    kadmin.local:  xst -k hdfs.keytab hdfs@CDH220.HADOOP.TERADATA.COM
    Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:hdfs.keytab.
    Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:hdfs.keytab.
    Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:hdfs.keytab.
    Entry for principal hdfs@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:hdfs.keytab.
    
    kadmin.local:  xst -k beehive.keytab beehive@CDH220.HADOOP.TERADATA.COM
    Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:beehive.keytab.
    Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:beehive.keytab.
    Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:beehive.keytab.
    Entry for principal beehive@CDH220.HADOOP.TERADATA.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:beehive.keytab.
  6. Use scp to transfer the hdfs.keytab and beehive.keytab files to the edge node under the /root directory.
  7. Copy the hdfs.keytab file to the HDFS user’s home directory. For HDP, the home directory is /home/hdfs. For CDH, the home directory is /var/lib/hadoop-hdfs. For example:
    cp /root/hdfs.keytab /var/lib/hadoop-hdfs/     # /home/hdfs for HDP and /var/lib/hadoop-hdfs for CDH
    chown hdfs:hdfs /var/lib/hadoop-hdfs/hdfs.keytab
  8. Obtain a Kerberos ticket by issuing this command: su - hdfs -c "kinit -kt hdfs.keytab hdfs"
  9. Verify the Kerberos ticket by issuing this command: su - hdfs -c "klist". For example:
    cdh220e1:~ # su - hdfs -c "klist"
    
    Ticket cache: FILE:/tmp/krb5cc_105
    
    Default principal: hdfs@CDH220.HADOOP.TERADATA.COM
    
    Valid starting     Expires            Service principal
    06/10/16 00:42:15  06/11/16 00:42:15  krbtgt/CDH220.HADOOP.TERADATA.COM@CDH220.HADOOP.TERADATA.COM
    renew until 06/17/16 00:42:15
    
    Kerberos 4 ticket cache: /tmp/tkt105
    klist: You have no tickets cached
  10. Create /aster in HDFS by executing these commands:
    su - hdfs -c "hdfs dfs -ls /"
    su - hdfs -c "hdfs dfs -mkdir /aster"
    su - hdfs -c "hdfs dfs -chown -R beehive:beehive /aster"
    su - hdfs -c "hdfs dfs -ls /"
  11. On the edge node (queen node), copy the beehive.keytab file and set the appropriate permissions by executing these commands:
    mkdir /home/beehive/.keytab
    chown beehive:beehive /home/beehive/.keytab
    cp /root/beehive.keytab /home/beehive/.keytab
    chown beehive:beehive /home/beehive/.keytab/beehive.keytab
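Step 1's lookup of the KDC hostname can also be scripted. A sketch, assuming the usual kdc = <host>[:port] layout in /etc/krb5.conf; kdc_host is an illustrative helper name:

```shell
# Sketch: print the first "kdc =" hostname from a krb5.conf-style file,
# stripping any :port suffix. kdc_host is a hypothetical helper.
kdc_host() {
    sed -n 's/^[[:space:]]*kdc[[:space:]]*=[[:space:]]*\([^:[:space:]]*\).*/\1/p' "$1" | head -n 1
}

# Usage on the edge node:
# kdc_host /etc/krb5.conf
```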
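The klist check in step 9 can be automated by extracting the default principal from the output and comparing it with the principal you created. A sketch; klist_principal is an illustrative helper name:

```shell
# Sketch: read klist output on stdin and print the default principal line's
# value. klist_principal is a hypothetical helper for scripting the step-9 check.
klist_principal() {
    sed -n 's/^Default principal:[[:space:]]*//p' | head -n 1
}

# Usage:
# su - hdfs -c "klist" | klist_principal
```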
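Steps 10 and 11 can be collected into one script for repeatable installs. A sketch with a DRY_RUN guard so the commands can be previewed before running them as root; run, create_aster_dir, and install_beehive_keytab are illustrative names:

```shell
# Sketch: steps 10 and 11 as functions. With DRY_RUN=1 the commands are only
# printed; otherwise they execute. All helper names here are illustrative.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

# Step 10: create /aster in HDFS and hand it to the beehive user.
create_aster_dir() {
    run su - hdfs -c "hdfs dfs -mkdir /aster"
    run su - hdfs -c "hdfs dfs -chown -R beehive:beehive /aster"
}

# Step 11: place beehive.keytab under /home/beehive/.keytab on the edge node.
install_beehive_keytab() {
    run mkdir /home/beehive/.keytab
    run chown beehive:beehive /home/beehive/.keytab
    run cp /root/beehive.keytab /home/beehive/.keytab
    run chown beehive:beehive /home/beehive/.keytab/beehive.keytab
}

# Preview first, then run for real as root:
# DRY_RUN=1 create_aster_dir && DRY_RUN=1 install_beehive_keytab
```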