Prerequisites
- Vantage downtime is required because scaling out reconfigures the database and migrates the premium storage. Schedule a maintenance window that minimally impacts users.
- You might need to increase Azure service limits. See Azure Service Limits.
- Make sure each new VM has one available IP address, and one IP address attached to an existing node, all from a single subnet. To allocate the required network resources, see Configuring COP Entries.
[Base, Advanced, and Enterprise tiers only] Use this procedure to scale out a system after deployment. You will receive an error if you attempt to scale out in any of the following scenarios:
- Beyond the maximum node limit for your current system. See Supported Node Counts.
- Beyond four times (4x) the size of a deployed 24 AMP-per-node system.
- Beyond the minimum storage subpool count (1 per node).
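The rejection rules above can be sketched as follows. This is an illustrative sketch only: the function name, parameters, and limit values are hypothetical and do not reflect the actual tdc-scale-out implementation.

```python
# Hypothetical sketch of the pre-checks described above; names, parameters,
# and the interpretation of each limit are assumptions, not tdc-scale-out code.

def validate_scale_out(target_nodes, deployed_nodes, max_nodes, subpool_count):
    """Return the reasons a scale-out request would be rejected (empty = OK)."""
    errors = []
    if target_nodes > max_nodes:
        errors.append("exceeds the maximum node limit for this system")
    # A deployed 24-AMP-per-node system may grow to at most 4x its deployed size.
    if target_nodes > 4 * deployed_nodes:
        errors.append("exceeds 4x the size of the deployed system")
    # Each node needs at least one storage subpool.
    if target_nodes > subpool_count:
        errors.append("exceeds the available storage subpool count (1 per node)")
    return errors

print(validate_scale_out(6, deployed_nodes=4, max_nodes=8, subpool_count=8))   # []
print(validate_scale_out(20, deployed_nodes=4, max_nodes=8, subpool_count=8))
```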
1. Stop the database.
# tpareset -x -f -y stop system
2. Run the following command to confirm the database is down.
# psh pdestate -a
PDE state: DOWN/HARDSTOP
3. Display all supported node configurations for your current system.
# tdc-scale-out -d
No changes will be applied to the system.
4. Run the following command with a specific node count and any optional arguments.
# tdc-scale-out node_count [options]
where:
- node_count is the number of nodes to which you want to scale out.
- [options] might include -a, -n, or -p. See Optional Arguments for Scaling Out or Scaling In.
This example shows how your current configuration will change after scaling out the system from 4 to 6 nodes:
database001-01:~ # tdc-scale-out 6 -p -a
Current Configuration:
===========================================================================
Nodes:
  Node Count: 4
---------------------------------------------------------------------------
CPU(Core)/Mem(GB):
  CPUs/Node: 20     CPUs Total: 80
  Mem/Node:  144    Mem Total:  576
---------------------------------------------------------------------------
AMPs/PEs:
  AMPs/Node: 12     AMPs Total: 48
  PEs/Node:  2      PEs Total:  8
===========================================================================
Current system will be scaled out to [6]:
===========================================================================
Nodes:
  Node Count: 4 => 6
---------------------------------------------------------------------------
CPU(Core)/Mem(GB):
  CPUs/Node: 20  == 20     CPUs Total: 80  => 120
  Mem/Node:  144 == 144    Mem Total:  576 => 864
---------------------------------------------------------------------------
AMPs/PEs:
  AMPs/Node: 12 => 8       AMPs Total: 48 == 48
  PEs/Node:  2  == 2       PEs Total:  8  => 12
===========================================================================
Continue? [yes/no] yes
5. Type yes to continue.
This process takes approximately 30 minutes to complete, after which the new configuration appears under Current Configuration.
6. [Optional] Run the following command to confirm the database is running.
# psh pdestate -a
PDE state: RUN/STARTED
7. [Optional] Check the diagnostic and troubleshooting logs.
Steps to Validate the Scaling Operation
PSIM
The following steps validate a successful scale-out or scale-in operation in PSIM.
- On the PSIM node, run the following command to check that the correct number of nodes is reported.
# psim-ecosystem-state
A sample output:
# psim-ecosystem-state
All 8 Nodes in Contact - OK
Database TDLabs - OK : RUN/STARTED
-- Summary --
PSIM - 3 nodes total
TPA - 2 nodes total
Ecosystem - 3 nodes total
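After a scaling operation, the node counts in the "-- Summary --" section should match the new configuration. A minimal sketch of checking them programmatically, assuming the output format shown in the sample above:

```python
# Parse the "-- Summary --" lines of psim-ecosystem-state output and verify
# node counts; the sample text is the output shown above, the parsing helper
# is an illustrative assumption, not a Teradata-supplied tool.
import re

sample = """All 8 Nodes in Contact - OK
Database TDLabs - OK : RUN/STARTED
-- Summary --
PSIM - 3 nodes total
TPA - 2 nodes total
Ecosystem - 3 nodes total"""

counts = {k: int(v)
          for k, v in re.findall(r"^(\w+) - (\d+) nodes total", sample, re.M)}
print(counts)  # {'PSIM': 3, 'TPA': 2, 'Ecosystem': 3}
```

On a live system you would feed the command's real output (for example, via subprocess) into the same parse and compare the TPA count against the node count you scaled to.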
- To validate that the TPA nodes have the correct IP addresses, view the following file on the PSIM node. A sample file and output are shown here.
cat /var/opt/teradata/psim/config/ecosystem.json
{
  "non_sqle_node_list": [
    { "ip_addr": "10.27.01.144" },
    { "ip_addr": "10.27.90.115" },
    { "ip_addr": "10.27.101.55" }
  ],
  "psim_domain_settings": {
    "serviceconnect": "PROD",
    "site_id": "ATSTLES4"
  },
  "psim_list": [
    { "ip_addr": "10.27.123.34", "type": "psim" },
    { "ip_addr": "10.27.141.1", "type": "psim" },
    { "ip_addr": "10.27.191.143", "type": "psim" }
  ],
  "psim_metadata_version": "1.0",
  "sqle_instance_list": [
    {
      "database_name": "TDLabs",
      "node_list": [
        { "ip_addr": "10.27.26.21" },
        { "ip_addr": "10.27.19.186" }
      ]
    }
  ]
}
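The IP check can also be scripted. The sketch below embeds a trimmed copy of the sample ecosystem.json shown above; on a real system you would json.load() the file at /var/opt/teradata/psim/config/ecosystem.json instead. The comparison logic is an illustrative assumption, not part of PSIM.

```python
# Verify that the SQLE (TPA) node IPs recorded in ecosystem.json match the
# addresses expected after scaling. The JSON is trimmed from the sample file
# above; the check itself is a hypothetical helper.
import json

eco = json.loads("""{
  "sqle_instance_list": [
    {"database_name": "TDLabs",
     "node_list": [{"ip_addr": "10.27.26.21"}, {"ip_addr": "10.27.19.186"}]}
  ]
}""")

expected = {"10.27.26.21", "10.27.19.186"}
for inst in eco["sqle_instance_list"]:
    actual = {node["ip_addr"] for node in inst["node_list"]}
    status = "OK" if actual == expected else "MISMATCH"
    print(inst["database_name"], status)  # TDLabs OK
```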
Because the DSC fabric requires manual updates if the media server name changes, run:
# dsc config_fabrics -f <file_name>.xml