DNS Configuration

Cloudera Distribution for Hadoop for Teradata Administrator Guide

brand
Open Source
prodname
Cloudera Distribution for Hadoop
vrm_release
5.8
5.9
category
Administration
featnum
B035-6049-086K
Hadoop nodes use a single name. Each node uses its /etc/hosts file to resolve hostnames when communicating with other nodes in the cluster. The name returned by the hostname -f command is used to resolve IP addresses. The Hadoop hostname must be the leftmost name on its line in /etc/hosts for Hadoop to recognize it. For example, a hosts file entry looks similar to:
39.0.8.2     newname1
  • Hostnames should be lowercase because some functions (for example, Kerberos) require it.
  • Do not change hostnames for your site configuration after Hadoop installation, as unexpected results could occur. For more information, contact your Teradata Customer Support representative.
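As a quick sanity check, the leftmost-name rule described above can be verified with a short script. The following is a minimal sketch that parses a sample hosts file containing the example entry from above; the temporary file and the check_leftmost helper are illustrations for this guide, not a Teradata-supplied tool.

```shell
#!/bin/sh
# Build a sample hosts file with the example entry from this section.
# (A real check would read /etc/hosts instead.)
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
39.0.8.2     newname1   newname1-alias
EOF

# Succeeds only if the given hostname is the leftmost name (field 2,
# immediately after the IP address) on a non-comment line.
check_leftmost() {
    awk -v h="$1" '$1 !~ /^#/ && $2 == h { found = 1 } END { exit !found }' "$2"
}

if check_leftmost newname1 "$HOSTS_FILE"; then
    echo "newname1 is the leftmost name: OK"
else
    echo "newname1 is NOT the leftmost name"
fi
# prints "newname1 is the leftmost name: OK"
```

Note that newname1-alias would fail the same check, because only the leftmost name on the line is the one Hadoop recognizes.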

Internal IP addresses are either manually configured and included in the hosts file, or assigned by Server Management via DHCP and referenced using the CMIC configuration.

External interfaces and corporate DNS must use the same hostnames that Hadoop uses on the internal BYNET network.

If an application or user requests data from the namenode service, the service returns a location based on the hostname Hadoop is using. When external clients access Hadoop, the Hadoop hostnames must either be included in the corporate DNS or be resolved to the nodes' external interfaces using the client's local /etc/hosts file.

The diagram illustrates a Hadoop cluster configured for external client access. The /etc/hosts file on each Hadoop node maps the hostnames to internal BYNET addresses, while the external client's /etc/hosts file maps the same hostnames to the Hadoop nodes' public IP addresses.
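To make the two views concrete, the fragments below sketch what the hosts files on each side might contain. Apart from newname1 and 39.0.8.2 from the earlier example, all names and addresses here are hypothetical illustrations, not values from a real installation.

```
# On each Hadoop node: hostnames map to internal BYNET addresses
39.0.8.2     newname1
39.0.8.3     newname2

# On the external client: the same hostnames map to the nodes' public IPs
10.25.4.2    newname1
10.25.4.3    newname2
```

Because both files use the same hostnames, a location returned by the namenode service resolves correctly on either side of the network boundary.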

Example of Networking Layout