Scale In a System | Teradata® VantageCloud Enterprise on Azure (DIY)

VantageCloud Enterprise on Azure (DIY) Installation and Administration Guide - 2.4.6

Product
Teradata® VantageCloud Enterprise on Azure
Release Number
2.4.6
Published
May 2025
Prerequisite: Vantage downtime is required, as scaling in reconfigures the database and migrates the premium storage. Schedule a time that minimally impacts users.
[Base, Advanced, and Enterprise tiers only] Use this procedure to scale in a system after deployment. The scale-in fails with an error if you attempt either of the following:
  • Scaling in below the base number of nodes you originally deployed.
  • Exceeding the maximum storage subpool count (8 per node).
After scaling in, you will automatically receive a Server Management alert if you provisioned more HSNs than TPA nodes. You can either keep or delete the extra HSNs.
  1. Stop the database.
    # tpareset -x -f -y stop system
  2. Run the following command to confirm the database is down.
    # psh pdestate -a
    PDE state: DOWN/HARDSTOP
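Steps 1 and 2 can be combined into a small polling script. This is a minimal sketch, not part of the product: it assumes `psh pdestate -a` prints a line containing DOWN/HARDSTOP once the database has fully stopped, as shown above, and the helper names and 5-minute limit are illustrative choices.

```shell
#!/bin/sh
# is_down: succeed if the given pdestate output shows a fully
# stopped database (DOWN/HARDSTOP), as in the sample output above.
is_down() {
    printf '%s\n' "$1" | grep -q 'DOWN/HARDSTOP'
}

# wait_for_down: poll `psh pdestate -a` every 10 seconds, giving up
# after 30 tries (~5 minutes). Hypothetical helper for illustration;
# tune the interval and limit to your site's standards.
wait_for_down() {
    tries=0
    while [ "$tries" -lt 30 ]; do
        if is_down "$(psh pdestate -a 2>/dev/null)"; then
            return 0
        fi
        sleep 10
        tries=$((tries + 1))
    done
    return 1
}
```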
  3. Display all supported node configurations for your current system.
    # tdc-scale-in -d
    No changes will be applied to the system.
  4. Run tdc-scale-in node_count [options], where node_count is the target number of nodes after scaling in.
    This example shows how your current configuration will change after scaling in the system from 6 to 4 nodes:
    database001-01:~ # tdc-scale-in 4 -a -p 
    
    Current Configuration:
    ===========================================================================
     Nodes:
             Node Count:    6
    ---------------------------------------------------------------------------
     CPU(Core)/Mem(GB):
             CPUs/Node:    20             CPUs Total:   120
              Mem/Node:   144              Mem Total:   864
    ---------------------------------------------------------------------------
     AMPs/PEs:
             AMPs/Node:     8             AMPs Total:    48
              PEs/Node:     2              PEs Total:    12 
    ===========================================================================
    
    
    Current system will be scaled out by [6]:
    ===========================================================================
     Nodes:
             Node Count:    6 => 4
    ---------------------------------------------------------------------------
     CPU(Core)/Mem(GB):
              CPUs/Node:   20 == 20       CPUs Total:   120 =>  80
               Mem/Node:  144 == 144       Mem Total:   864 => 576
    ---------------------------------------------------------------------------
     AMPs/PEs:
              AMPs/Node:    8 => 12        AMPs Total:    48 == 48
               PEs/Node:    2 == 2         PEs Total:     12 =>  8
    ===========================================================================
    Continue? [yes/no] yes
  5. Type yes to continue.
    This process takes approximately 30 minutes to complete, after which the new configuration appears under Current Configuration.
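The before/after figures in the example follow directly from the node count: the AMP total is preserved and redistributed (48 AMPs across 4 instead of 6 nodes gives 12 per node), while per-node CPU, memory, and PE counts stay fixed, so their totals shrink with the node count. A quick check of the example's arithmetic:

```shell
#!/bin/sh
# Recompute the example's post-scale-in figures for 6 => 4 nodes.
nodes_new=4
amps_total=48       # preserved across the scale-in
cpus_per_node=20    # fixed per node
mem_per_node=144    # GB, fixed per node
pes_per_node=2      # fixed per node

echo "AMPs/Node:  $((amps_total / nodes_new))"     # 12
echo "CPUs Total: $((cpus_per_node * nodes_new))"  # 80
echo "Mem Total:  $((mem_per_node * nodes_new))"   # 576
echo "PEs Total:  $((pes_per_node * nodes_new))"   # 8
```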
  6. [Optional] Run the following command to confirm the database is running.
    # psh pdestate -a
    PDE state: RUN/STARTED
Steps to Validate the Scaling Operation

PSIM

Use the following steps to validate a successful scale-out or scale-in operation in PSIM.

    1. On the PSIM node, run the following command to confirm that the correct number of nodes is reported.
      # psim-ecosystem-state
      
      A sample output:
      # psim-ecosystem-state
      All 8 Nodes in Contact - OK
      Database TDLabs - OK : RUN/STARTED
       
      -- Summary --
       
      PSIM - 3 nodes total
      TPA - 2 nodes total
      Ecosystem - 3 nodes total
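The summary lines above can also be checked mechanically after a scale operation. This is a sketch that assumes the output format shown in the sample (a "TPA - N nodes total" line); the function name is illustrative:

```shell
#!/bin/sh
# tpa_count: read psim-ecosystem-state output on stdin and print the
# node count from its "TPA - N nodes total" summary line (format
# taken from the sample output above).
tpa_count() {
    awk '/^TPA - /{print $3}'
}
```

For example, piping the sample output above through tpa_count prints 2; after a scale-in you would compare the printed count against your target node count.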
      
    2. To validate that the TPA nodes have the correct IP addresses, view the following file on the PSIM node. A sample file and output are shown here.
      # cat /var/opt/teradata/psim/config/ecosystem.json
      {
          "non_sqle_node_list": [
              {
                  "ip_addr": "10.27.01.144"
              },
              {
                  "ip_addr": "10.27.90.115"
              },
              {
                  "ip_addr": "10.27.101.55"
              }
          ],
          "psim_domain_settings": {
              "serviceconnect": "PROD",
              "site_id": "ATSTLES4"
          },
          "psim_list": [
              {
                  "ip_addr": "10.27.123.34",
                  "type": "psim"
              },
              {
                  "ip_addr": "10.27.141.1",
                  "type": "psim"
              },
              {
                  "ip_addr": "10.27.191.143",
                  "type": "psim"
              }
          ],
          "psim_metadata_version": "1.0",
          "sqle_instance_list": [
              {
                  "database_name": "TDLabs",
                  "node_list": [
                      {
                          "ip_addr": "10.27.26.21"
                      },
                      {
                          "ip_addr": "10.27.19.186"
                      }
                  ]
              }
          ]
      }

    If the media server name changed as part of the scaling operation, the DSC fabric requires a manual update. Run # dsc config_fabrics -f <file_name>.xml.
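The ip_addr check in step 2 can be scripted as well. This is a fragile text-based sketch that relies on sqle_instance_list being the last node list in the file, as in the sample above; jq or a real JSON parser is more robust if available on the node:

```shell
#!/bin/sh
# sqle_node_ips: read ecosystem.json on stdin and print the ip_addr
# values that appear after the "sqle_instance_list" key. Assumes the
# key ordering shown in the sample file above; use jq for anything
# more robust.
sqle_node_ips() {
    awk '/"sqle_instance_list"/ { in_sqle = 1 }
         in_sqle && /"ip_addr"/ { gsub(/[",{}]/, ""); print $2 }'
}
```

On the PSIM node, `cat /var/opt/teradata/psim/config/ecosystem.json | sqle_node_ips` should list exactly the TPA node addresses, which you can compare against the expected post-scale-in node list.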

  7. [Optional] Check the diagnostic and troubleshooting logs.