Prerequisites
- Vantage downtime is required because scaling in reconfigures the database and migrates the EBS data volumes. Schedule a time that minimally impacts users.
- Make sure you have increased the necessary limits. Read about COP entries and confirm your system is properly configured.
Use this procedure to scale in a system after deployment.
- [First time you scale in] Check to see if your system can be scaled in:
# tdc-scale-in -d
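If you want to keep a record of this check for later comparison, a minimal shell sketch such as the following saves the command output with a timestamp. This is not part of the documented procedure, and the log path is illustrative only.
# Save the scale-in feasibility check output for later reference (path is illustrative).
tdc-scale-in -d 2>&1 | tee /var/tmp/tdc-scale-in-check-$(date +%Y%m%d-%H%M%S).log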
- Stop the database.
# tpareset -x -y stop for scaling in
- Verify the database is in a DOWN/HARDSTOP state.
# pdestate -a
PDE state: DOWN/HARDSTOP
Putting the database in this state may take several minutes.
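Because reaching DOWN/HARDSTOP may take several minutes, you may prefer to poll rather than rerun the command by hand. The following is a minimal shell sketch, not part of the documented procedure; the 20-minute timeout and 30-second interval are arbitrary illustrative values.
#!/bin/bash
# Poll pdestate until the database reports DOWN/HARDSTOP, or give up after a timeout.
# Timeout and poll interval are illustrative; adjust for your site.
timeout_secs=1200
interval_secs=30
elapsed=0
while [ "$elapsed" -lt "$timeout_secs" ]; do
    state=$(pdestate -a 2>&1)
    if echo "$state" | grep -q "DOWN/HARDSTOP"; then
        echo "Database is in DOWN/HARDSTOP state."
        exit 0
    fi
    echo "Current state: $state (waiting...)"
    sleep "$interval_secs"
    elapsed=$((elapsed + interval_secs))
done
echo "Timed out waiting for DOWN/HARDSTOP." >&2
exit 1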
- Type # tdc-scale-in [node_count] where node_count is the number of nodes to decrease to; it must be less than the current node count. After entering the command, check the log files to determine how long the process will take to finish. The output shows the current configuration and how it changes when the system is scaled. In the following example, the node count is changed from 4 to 2.
Current Configuration:
===========================================================================
Nodes:
  Node Count: 2
---------------------------------------------------------------------------
CPU(Core)/Mem(GB):
  CPUs/Node: 16    CPUs Total: 32
  Mem/Node: 65     Mem Total: 130
---------------------------------------------------------------------------
AMPs/PEs:
  AMPs/Node: 24    AMPs Total: 48
  PEs/Node: 2      PEs Total: 4
===========================================================================

scale-out (unfold) the current system to [4] nodes:
===========================================================================
Nodes:
  Node Count: 2 => 4
---------------------------------------------------------------------------
CPU(Core)/Mem(GB):
  CPUs/Node: 16 == 16    CPUs Total: 32 => 64
  Mem/Node: 65 == 65     Mem Total: 130 => 260
---------------------------------------------------------------------------
AMPs/PEs:
  AMPs/Node: 24 => 12    AMPs Total: 48 == 48
  PEs/Node: 2 == 2       PEs Total: 4 => 8
===========================================================================

Note:
1. Scaling out (unfolding) a system will INCREASE the node count by provisioning additional instances and other needed resources, including network interfaces, IP addresses. Therefore, the system will COST MORE for both infrastructure and software.
2. The additional IP addresses in the scale out operation will consume additional subnet space. If the subnet this system is operating in does not have enough IP addresses for the new instances being added to the system, this operation will fail.
3. Scaling out a system will NOT INCREASE data storage. The database capacity will NOT be changed after scaling out.
4. Scaling out will boost the overall performance of the system by adding more computation nodes (i.e., CPU and Memory) and increase the total storage bandwidth available to the system by decreasing the data volumes managed per node.
5. A system can always be scaled back (scale in) after scaling out.

Continue? [yes/no] yes
- Type yes. When the process completes, the new configuration appears under Current Configuration.
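As the sample output shows, scaling changes the per-node AMP count while the total AMP count stays the same. A quick back-of-the-envelope check of how AMPs will redistribute is sketched below; the values are taken from the example and are illustrative only.
# Estimate AMPs per node after changing the node count.
# Assumes, as in the sample output, that the total AMP count is unchanged
# and divides evenly across the remaining nodes.
amps_total=48      # total AMPs reported in the current configuration
target_nodes=2     # node count you are scaling in to
echo "Expected AMPs per node: $((amps_total / target_nodes))"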
- [Optional] Check the database status.
# pdestate -a
- Bring up the Teradata system configuration to confirm the number of nodes is correct.
# tdinfo
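If you want a record of how the configuration changed, one option (a sketch, not part of the documented procedure; file names are illustrative) is to capture the tdinfo output before and after the operation and compare the two.
# Capture the configuration before and after the scale-in, then compare.
tdinfo > /var/tmp/tdinfo_before.txt    # run this before the scale-in
# ... perform the scale-in ...
tdinfo > /var/tmp/tdinfo_after.txt     # run this after the scale-in
diff /var/tmp/tdinfo_before.txt /var/tmp/tdinfo_after.txt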
- If you are using Teradata DSC to run jobs, type the following command on all Vantage nodes to update the configuration of the media server.
# /etc/init.d/clienthandler restart-hwupgrade
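Rather than logging in to each node, you can fan the restart out from one session with the PDE parallel shell (psh, which also appears in the troubleshooting note below), assuming psh is available and configured to reach all Vantage nodes; running the command manually on each node works just as well.
# Restart the client handler on every node from a single session.
/usr/pde/bin/psh /etc/init.d/clienthandler restart-hwupgrade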
- [Optional] Check the logs for troubleshooting. While scaling in, if you encounter the following error:
Error: Task Error:[Snapshot Pdisk Information] Failed to execute command /usr/pde/bin/psh -sum 0 nvme list -o json. Execution timeout
then turn off the cloudwatch_log option by running /usr/local/bin/tdc-scale-in <node-count> -a -t --cloudwatch_log=no.