Data source nodes can be managed by up to three QueryGrid Manager clusters by enabling multiple QueryGrid Manager cluster support. The tdqg-node service connects to each QueryGrid Manager cluster to discover the QueryGrid services to deploy on the nodes. With this support enabled, an ecosystem can communicate across clusters and data sources without creating silos between clusters. Two methods are available for adding a system to an additional cluster:
- Auto Install – adds the current QueryGrid Manager cluster as the new primary or secondary cluster for a system without removing the system from its existing clusters.
- Import System – adds a new QueryGrid Manager cluster as either the primary or secondary cluster by importing the system and nodes from another QueryGrid Manager cluster. Available as a command-line tool or through the Viewpoint QueryGrid portlet.
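The membership model behind these two methods can be sketched as follows. This is an illustrative Python model, not QueryGrid code; the class and field names are invented, and only the three-cluster limit and the one-primary rule come from the text above.

```python
class NodeClusterMembership:
    """Illustrative model of a node's QueryGrid Manager cluster
    memberships (hypothetical sketch, not actual QueryGrid code)."""

    MAX_CLUSTERS = 3  # a node can be managed by up to three clusters

    def __init__(self, primary):
        self.primary = primary   # every node has exactly one primary cluster
        self.secondaries = []    # any remaining memberships are secondary

    def add_cluster(self, cluster, as_primary=False):
        """Join an additional cluster without leaving existing ones,
        mirroring what Auto Install and Import System allow."""
        if cluster == self.primary or cluster in self.secondaries:
            raise ValueError(f"already a member of {cluster}")
        if 1 + len(self.secondaries) >= self.MAX_CLUSTERS:
            raise ValueError("a node can belong to at most three clusters")
        if as_primary:
            # the old primary is kept as a secondary membership
            self.secondaries.append(self.primary)
            self.primary = cluster
        else:
            self.secondaries.append(cluster)

node = NodeClusterMembership("prod-qgm")
node.add_cluster("dev-qgm")                        # second cluster, secondary
node.add_cluster("vantage-qgm", as_primary=True)   # third cluster, new primary
```

Note that joining a fourth cluster raises an error in this sketch, matching the three-cluster limit stated above.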
One of the main reasons for maintaining multiple clusters is to keep a development cluster separate from a production cluster. A production or development data source can be part of both the production and development clusters. This allows the development system to be seeded with data from production while still maintaining separate environments. Another reason for having multiple clusters is to keep the internal Full Vantage cluster isolated from the ecosystem deployment of QueryGrid Manager while still allowing SQL Engine to belong to both the Vantage internal cluster and the ecosystem cluster.
When a system is part of multiple QueryGrid Manager clusters, the fabric port numbers and the link names involving that system must be unique across clusters. When QueryGrid Manager detects fabric port or link name conflicts, those conflicts are reported as an issue in the Viewpoint QueryGrid portlet.
When there is a conflict, the fabric or link that has been in existence the longest has precedence. This helps prevent new changes from breaking working functionality.
To resolve a conflict, change the conflicting port number or link name in one of the clusters. Remember to also update the foreign server definition after changing the link name.
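The oldest-wins precedence rule can be sketched as a simple grouping check. The function, dictionary shapes, and field names below are illustrative assumptions, not QueryGrid's actual data model or detection logic.

```python
def find_link_conflicts(links):
    """Group link definitions from multiple clusters by name and, where
    the same name appears in more than one cluster, give precedence to
    the definition that has existed the longest (illustrative sketch)."""
    by_name = {}
    for link in links:
        by_name.setdefault(link["name"], []).append(link)

    conflicts = []
    for name, defs in by_name.items():
        clusters = {d["cluster"] for d in defs}
        if len(clusters) > 1:  # same link name defined in several clusters
            winner = min(defs, key=lambda d: d["created"])  # oldest wins
            conflicts.append({"name": name, "winner": winner["cluster"]})
    return conflicts

links = [
    {"name": "prod_to_dev", "cluster": "prod-qgm", "created": 1},
    {"name": "prod_to_dev", "cluster": "dev-qgm",  "created": 5},  # conflict
    {"name": "dev_only",    "cluster": "dev-qgm",  "created": 3},
]
conflicts = find_link_conflicts(links)
```

Here the older `prod-qgm` definition of `prod_to_dev` takes precedence, so renaming the newer `dev-qgm` link (and updating its foreign server definition) resolves the conflict without breaking working functionality.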
All nodes of a system must have the same primary cluster. However, nodes can differ on which cluster is primary because of local node settings. When this occurs, an issue is reported in the QueryGrid portlet. The primary cluster is also listed in the node section and node details of the QueryGrid portlet. To set the primary cluster for a system, use the set-primary.sh command.
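The consistency requirement above can be sketched as a simple check across a system's nodes. The data shapes are illustrative assumptions, and set-primary.sh's actual arguments are intentionally not shown because they are not documented in this section.

```python
def check_primary_consistency(nodes):
    """Return an issue string if the nodes of a system disagree on the
    primary cluster, else None (illustrative sketch, not QueryGrid code)."""
    primaries = {node["primary_cluster"] for node in nodes}
    if len(primaries) > 1:
        return f"Issue: nodes disagree on primary cluster: {sorted(primaries)}"
    return None

nodes = [
    {"name": "node1", "primary_cluster": "prod-qgm"},
    {"name": "node2", "primary_cluster": "dev-qgm"},  # misconfigured node
]
issue = check_primary_consistency(nodes)
```

When the check reports an issue like this, running set-primary.sh on the system would bring all nodes back to the same primary cluster.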