If you merged multiple QueryGrid Managers into a single cluster and now want to split them into separate systems, you must first decouple the QueryGrid Managers. This example separates a development system from a production system.
If the development cluster consists of multiple QueryGrid Manager instances, SSH into one of the instances and perform the following steps to separate it:
- Generate a QueryGrid Manager backup file from the development instance, where mm/dd/yy is the backup date (a hypothetical filled-in example follows this step):
  $ /opt/teradata/tdqgm/bin/backup.sh -f /tmp/qgm-backup-mm/dd/yy.zip
The backup file contains both the development and production QueryGrid Manager configurations.
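A filled-in invocation might look like the following sketch; the dated filename is hypothetical, and the ls command is only a standard-shell check that the archive was written.
  # Hypothetical example: substitute the actual backup date for 03-05-25.
  $ /opt/teradata/tdqgm/bin/backup.sh -f /tmp/qgm-backup-03-05-25.zip
  # Confirm the backup archive exists before resetting the node.
  $ ls -lh /tmp/qgm-backup-03-05-25.zip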
- Perform a reset on the node to stop the QueryGrid Manager service and clean up the node's state, as in the following example:
  $ /opt/teradata/tdqgm/bin/reset.sh
  Starting reset command, just a moment...
  A reset will return the QGM back to the state when it was first installed.
  Are you sure you want to bounce the services and delete all configuration and data? [y/n]: y
  Stopping QueryGrid Manager...
  Deleting previous state...
  Starting QueryGrid Manager...
  Reset completed successfully.
- Run the migrate command to migrate the configuration objects for the development fabric links from production to the local QueryGrid Manager cluster (a hypothetical filled-in invocation follows):
  # Run the migrate command with options '-s' and '-l' to migrate the systems referenced by the links and skip any configuration objects with the same name.
  $ /opt/teradata/tdqgm/bin/migrate.sh -s /tmp/qgm-backup-mm/dd/yy.zip -m <External QGM's public address> -l link-names
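As a sketch only, a filled-in invocation might look like this; the backup date, the production QueryGrid Manager address (203.0.113.10), and the link name (finance_dev_link) are all hypothetical values.
  # Hypothetical values shown; use your own backup file, address, and link names.
  $ /opt/teradata/tdqgm/bin/migrate.sh -s /tmp/qgm-backup-03-05-25.zip -m 203.0.113.10 -l finance_dev_link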
- Validate the list of configuration objects when prompted.
- At the prompts, select either to migrate the nodes or to import the nodes from an instance. Choose migrate nodes for systems that belong only to the development environment, and import nodes for systems that need to be shared between both QueryGrid clusters.
After completion, the development instance is no longer clustered with the production instance.
- If another QueryGrid Manager exists on the development cluster, run /opt/teradata/tdqgm/bin/create-join-cluster-token.sh and capture the output.
  $ /opt/teradata/tdqgm/bin/create-join-cluster-token.sh
  Starting create-join-cluster-token command, just a moment...
  Join cluster token created. It expires in 24 hours.
  Join Cluster ID: 690f855a-2dc2-4152-be5e-53984bf8f6f1
  Join Cluster Host: 10.25.238.110
  Join Cluster Token: 9R3ISjRihTlvV+iE3w+C3YMSSEPyFuu3HbVIuusLreZdWbLEsCdNJhOldpL4MwYzygD7Sb9efnJsaTCfTmaJEQ==
- SSH into the next QueryGrid Manager instance and run the following commands: first reset the node to stop the service and clean up the state of the instance, then join it to the new development cluster (an annotated sketch follows this step):
  $ /opt/teradata/tdqgm/bin/reset.sh
  $ /opt/teradata/tdqgm/bin/join-cluster.sh
  By selecting the token join method and providing the token and IP address captured from the first QueryGrid Manager instance, this instance is now clustered with the first development QueryGrid Manager instance. Wait at least 5 minutes before adding the reset QueryGrid Manager instance to Viewpoint. Any previous Monitored Systems in Viewpoint that reference this QueryGrid Manager must be removed and replaced.
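The sequence on each additional instance, sketched with comments. Both scripts are interactive, so the host and token are pasted at the prompts rather than passed as flags.
  # On the additional development QueryGrid Manager instance:
  $ /opt/teradata/tdqgm/bin/reset.sh
  # Answer 'y' to bounce the services and delete all configuration and data.
  $ /opt/teradata/tdqgm/bin/join-cluster.sh
  # Choose the token join method, then supply the Join Cluster Host and
  # Join Cluster Token captured from create-join-cluster-token.sh above.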
- [Optional] Repeat the preceding steps if there are more instances on the development system to join the cluster.
- Log on to Viewpoint and access the production portlet to remove the development objects.
- Under Fabric Components, go to Managers.
- Remove the offline development QueryGrid Manager instances.
- Go to each of the development-only systems and wait for all nodes in the system to display as offline before deleting the development system configuration from the production instance.
- Go to each of the development systems that are shared with the production instance and confirm each node displays as online.
- Delete the fabric configuration that is only applicable to the development cluster.
- Delete the data center that is only applicable to the development cluster.