Prerequisite
Vantage downtime is required, as scaling in reconfigures the database and migrates the premium storage. Schedule a time that minimally impacts users.
[Base, Advanced, and Enterprise tiers only] Use this procedure to scale in a system after deployment. Attempting to scale in under either of the following scenarios results in an error:
- Scaling in below the base number of nodes you originally deployed.
- Exceeding the maximum storage subpool count (8 per node); the sketch after this list illustrates the arithmetic.
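To make the second limit concrete: when you scale in, the storage subpools of the removed nodes are redistributed across the remaining nodes, and no node may exceed 8 subpools. The following shell arithmetic is a sketch, not a Teradata utility, and the total subpool count is an assumed placeholder; it computes the smallest node count that stays within the limit.

  # Sketch only (not a Teradata tool). TOTAL_SUBPOOLS is an assumed
  # placeholder; substitute the actual subpool count for your system.
  TOTAL_SUBPOOLS=48
  MAX_SUBPOOLS_PER_NODE=8
  # Ceiling division: the fewest nodes that keep every node at or under 8.
  MIN_NODES=$(( (TOTAL_SUBPOOLS + MAX_SUBPOOLS_PER_NODE - 1) / MAX_SUBPOOLS_PER_NODE ))
  echo "Smallest valid scale-in target: ${MIN_NODES} nodes"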
- Stop the database.
  # tpareset -x -f -y stop system
- Run the following command to confirm the database is down.
  # psh pdestate -a
  PDE state: DOWN/HARDSTOP
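If you prefer to script this check rather than rerun it by hand, the following loop is a minimal sketch that assumes the output format shown above (one "PDE state" line per node) and polls until every node reports DOWN/HARDSTOP.

  # Sketch only: poll until no node reports a PDE state other than
  # DOWN/HARDSTOP. Adjust the grep patterns if your release formats
  # the pdestate output differently.
  while psh pdestate -a | grep 'PDE state' | grep -qv 'DOWN/HARDSTOP'; do
      echo "Waiting for the database to stop..."
      sleep 10
  done
  echo "All nodes report DOWN/HARDSTOP."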
- Display all supported node configurations for your current system.
  # tdc-scale-in -d
  No changes will be applied to the system.
- Run tdc-scale-in node_count [options].
  where:
  - node_count is the number of nodes to which you want to scale in.
  - [options] might include -a, -n, or -p. See Optional Arguments for Scaling Out or Scaling In.
  This example shows how your current configuration will change after scaling in the system from 6 to 4 nodes:
  database001-01:~ # tdc-scale-in 4 -a -p
  Current Configuration:
  ===========================================================================
  Nodes:
      Node Count: 6
  ---------------------------------------------------------------------------
  CPU(Core)/Mem(GB):
      CPUs/Node: 20          CPUs Total: 120
      Mem/Node:  144         Mem Total:  864
  ---------------------------------------------------------------------------
  AMPs/PEs:
      AMPs/Node: 8           AMPs Total: 48
      PEs/Node:  2           PEs Total:  12
  ===========================================================================
  Current system will be scaled in to [4]:
  ===========================================================================
  Nodes:
      Node Count: 6 => 4
  ---------------------------------------------------------------------------
  CPU(Core)/Mem(GB):
      CPUs/Node: 20 == 20    CPUs Total: 120 => 80
      Mem/Node:  144 == 144  Mem Total:  864 => 576
  ---------------------------------------------------------------------------
  AMPs/PEs:
      AMPs/Node: 8 => 12     AMPs Total: 48 == 48
      PEs/Node:  2 == 2      PEs Total:  12 => 8
  ===========================================================================
  Continue? [yes/no] yes
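As a sanity check on the example: the AMP total stays fixed at 48, so AMPs/Node rises from 48/6 = 8 to 48/4 = 12, and with 2 PEs per node the PE total falls from 6 x 2 = 12 to 4 x 2 = 8. The following lines, a sketch using only the example's values, reproduce that arithmetic.

  # Sketch reproducing the arithmetic in the example output above.
  AMPS_TOTAL=48; PES_PER_NODE=2; TARGET_NODES=4
  echo "AMPs/Node after scale-in: $(( AMPS_TOTAL / TARGET_NODES ))"    # 12
  echo "PEs Total after scale-in: $(( TARGET_NODES * PES_PER_NODE ))"  # 8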
- Type yes to continue.
  This process takes approximately 30 minutes to complete, after which the new configuration appears under Current Configuration.
- [Optional] Run the following command to confirm the database is running.
  # psh pdestate -a
  PDE state: RUN/STARTED
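If you run this confirmation from a script, the following one-liner is a sketch (assuming the output format shown above) that fails loudly when the database is not started.

  # Sketch only: exit nonzero if no node reports RUN/STARTED.
  psh pdestate -a | grep -q 'RUN/STARTED' || { echo "Database is not started" >&2; exit 1; }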
Steps to Validate a Scaling Operation

PSIM

The following steps validate a successful scale-out or scale-in operation in PSIM:
- On the PSIM node, run the following command to check that the correct number of nodes is observed.
  # psim-ecosystem-state
  A sample output:
  # psim-ecosystem-state
  All 8 Nodes in Contact - OK
  Database TDLabs - OK : RUN/STARTED
  -- Summary --
  PSIM - 3 nodes total
  TPA - 2 nodes total
  Ecosystem - 3 nodes total
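To automate this check, the following sketch parses the summary with awk and compares the TPA node count against the count you expect after scaling. The expected value is an assumed placeholder, and the pattern is based on the sample summary format above.

  # Sketch only: verify the TPA node count reported by the summary.
  EXPECTED_TPA=2   # assumed placeholder: set to your post-scaling node count
  ACTUAL_TPA=$(psim-ecosystem-state | awk '/TPA -/ {print $3}')
  if [ "$ACTUAL_TPA" = "$EXPECTED_TPA" ]; then
      echo "TPA node count OK (${ACTUAL_TPA})"
  else
      echo "Unexpected TPA node count: ${ACTUAL_TPA:-none}" >&2
  fi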
- To validate that the TPA nodes have the correct IP addresses, view the following file on the PSIM node. A sample file and its contents are shown here.
  # cat /var/opt/teradata/psim/config/ecosystem.json
  {
      "non_sqle_node_list": [
          { "ip_addr": "10.27.01.144" },
          { "ip_addr": "10.27.90.115" },
          { "ip_addr": "10.27.101.55" }
      ],
      "psim_domain_settings": {
          "serviceconnect": "PROD",
          "site_id": "ATSTLES4"
      },
      "psim_list": [
          { "ip_addr": "10.27.123.34", "type": "psim" },
          { "ip_addr": "10.27.141.1", "type": "psim" },
          { "ip_addr": "10.27.191.143", "type": "psim" }
      ],
      "psim_metadata_version": "1.0",
      "sqle_instance_list": [
          {
              "database_name": "TDLabs",
              "node_list": [
                  { "ip_addr": "10.27.26.21" },
                  { "ip_addr": "10.27.19.186" }
              ]
          }
      ]
  }
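Rather than reading the JSON by eye, you can extract the SQLE (TPA) node addresses with jq, if it is installed on the node. This is a sketch based on the structure of the sample file above, not a documented Teradata step.

  # Sketch only: list and count the SQLE node IP addresses recorded
  # in ecosystem.json.
  jq -r '.sqle_instance_list[].node_list[].ip_addr' \
      /var/opt/teradata/psim/config/ecosystem.json
  jq '[.sqle_instance_list[].node_list[]] | length' \
      /var/opt/teradata/psim/config/ecosystem.json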
- Because the DSC fabric requires a manual update whenever the media server name changes, run the following command.
  # dsc config_fabrics -f <file_name>.xml
- [Optional] Check the diagnostic and troubleshooting logs.