Using Multiple Management Nodes
To use multiple management nodes, you first need to create a config.ini file that is completely identical on all management nodes. Nothing checks this for you, but if the files differ, the result is a giant mess and, ultimately, a cluster crash. When using multiple management nodes, you also have to assign all the node IDs manually in the configuration file; you cannot rely on auto-assignment, because you now have multiple management nodes handing out IDs.
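Because the cluster does no such check, it is worth verifying the copies yourself, for example by comparing checksums from the primary management node. This is a sketch only; it assumes config.ini lives in /var/lib/mysql-cluster on both hosts and uses the example addresses introduced below:

# On the primary management node, compare both copies of config.ini
md5sum /var/lib/mysql-cluster/config.ini
ssh 10.0.0.1 md5sum /var/lib/mysql-cluster/config.ini
# The two checksums must match exactly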
You then need to decide which of your management nodes is going to be the primary. For example, say that the primary management node is 10.0.0.0 and the backup is 10.0.0.1, and assume a storage node and a SQL node on each of 10.0.0.2 and 10.0.0.3:
[NDB_MGMD DEFAULT]
PortNumber=1186
LogDestination=CONSOLE;SYSLOG:facility=syslog;FILE:filename=/var/log/cluster-log
DataDir=/var/lib/mysql-cluster
ArbitrationRank=1

# First (PRIMARY) mgm node
[NDB_MGMD]
Id=1
Hostname=10.0.0.0

# Second (BACKUP) mgm node
[NDB_MGMD]
Id=2
Hostname=10.0.0.1

# Storage nodes
[NDBD DEFAULT]
DataDir=/var/lib/mysql-cluster

[NDBD]
Id=3
Hostname=10.0.0.2

[NDBD]
Id=4
Hostname=10.0.0.3

# SQL nodes
[MYSQLD]
Id=5
Hostname=10.0.0.2

[MYSQLD]
Id=6
Hostname=10.0.0.3
Now, on all four servers, you place the following in /etc/my.cnf:
[mysqld]
ndbcluster
# connectstring: primary,secondary management nodes
ndb-connectstring=10.0.0.0,10.0.0.1

[mysql_cluster]
ndb-connectstring=id=x,10.0.0.0,10.0.0.1
Notice the id=x in the second connect string: make sure you replace x with the correct node ID for that server, as specified in the configuration file.
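For example, on the first storage node (10.0.0.2, whose ndbd process was given node ID 3 in config.ini), the second section would read as follows. This is just an illustration of the id syntax:

[mysql_cluster]
# id=3 is the node ID assigned to the ndbd process on this host;
# the primary management node is listed first
ndb-connectstring=id=3,10.0.0.0,10.0.0.1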
Now, on the primary management server, you start ndb_mgmd as per the instructions in Chapter 1, in the section "Starting a Management Node." Once it has started, you switch to the secondary management server and start ndb_mgmd there in exactly the same way. Finally, you start the two storage nodes (ndbd) and restart MySQL on the two SQL nodes, as sketched below.
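In case Chapter 1 is not to hand, the full sequence looks roughly like this. This is a sketch only: it assumes config.ini lives in /var/lib/mysql-cluster on both management servers and that your distribution controls MySQL with service mysql:

# 1. On the primary management server (10.0.0.0)
ndb_mgmd -f /var/lib/mysql-cluster/config.ini

# 2. On the backup management server (10.0.0.1)
ndb_mgmd -f /var/lib/mysql-cluster/config.ini

# 3. On each storage node (10.0.0.2 and 10.0.0.3)
ndbd

# 4. On each SQL node (10.0.0.2 and 10.0.0.3)
service mysql restart

Once everything is running, issue a SHOW in the management client. You should see something like this: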
ndb_mgm> SHOW
Connected to Management Server at: 10.0.0.0:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.0.0.2  (Version: 5.0.12, Nodegroup: 0)
id=4    @10.0.0.3  (Version: 5.0.12, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 2 node(s)
id=1    (Version: 5.0.12)
id=2    (Version: 5.0.12)

[mysqld(API)]   2 node(s)
id=5    @10.0.0.2  (Version: 5.0.12)
id=6    @10.0.0.3  (Version: 5.0.12)
Note
Note that 10.0.0.0 is the IP address of the primary management daemon. If you use ndb_mgm on any of the four servers (including the server that is running the backup ndb_mgmd daemon), you should still get this output: it connects to the primary because, at this stage, the primary is working.
Now, let's see what happens when you unplug the power cable of the server running the primary management daemon, 10.0.0.0. First, start ndb_mgm on one of the other three servers (that is, not the one you are about to unplug). Then unplug the server that the primary management daemon is running on and try to run a SHOW command:
ndb_mgm> SHOW
Could not get status
* 0: No error
* Executing: ndb_mgm_disconnect
It appears not to have worked, but this is exactly what you should expect: the management client is still trying to talk to the primary management daemon, which is now dead. If you exit and reopen ndb_mgm, it should work:
ndb_mgm> exit
[root@s1 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show;
Connected to Management Server at: 10.0.0.1:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.0.0.2  (Version: 5.0.12, Nodegroup: 0)
id=4    @10.0.0.3  (Version: 5.0.12, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 2 node(s)
id=1    (not connected, accepting connect from 10.0.0.0)
id=2    @10.0.0.1  (Version: 5.0.12)

[mysqld(API)]   2 node(s)
id=5    (Version: 5.0.12)
id=6    (Version: 5.0.12)
Note
Note that the management client is now connected to the IP address of the backup management node, 10.0.0.1.
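Incidentally, if you are on a host whose /etc/my.cnf does not carry the connect string, you can pass it to ndb_mgm explicitly; a freshly started client then tries each listed management server in turn. This is a sketch, using the same connect-string format as in /etc/my.cnf above:

ndb_mgm --ndb-connectstring=10.0.0.0,10.0.0.1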
The NDB and API nodes do not need to be restarted in the event of a primary management daemon failure. They will start communicating with the backup management daemon almost immediately, and they will switch back as and when they need to.