How To Set Up A Load-Balanced MySQL Cluster

Version 1.0
Author: Falko Timme
Last edited 03/27/2006


This tutorial shows how to configure a MySQL 5 cluster with three nodes: two storage nodes and one management node. The cluster is load-balanced by a high-availability load balancer that itself consists of two nodes running the Ultra Monkey package, which provides heartbeat (to check whether the other node is still alive) and ldirectord (to distribute the requests to the nodes of the MySQL cluster).

In this document I use Debian Sarge for all nodes. Therefore the setup might differ a bit for other distributions. The MySQL version I use in this setup is 5.0.19. If you do not want to use MySQL 5, you can use MySQL 4.1 as well, although I haven't tested it.

This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in a lot of other documents on the web.

This document comes without warranty of any kind! I want to say that this is not the only way of setting up such a system. There are many ways of achieving this goal; this is the one I take. I do not issue any guarantee that this will work for you!


1 My Servers
I use the following Debian servers that are all in the same network (192.168.0.x in this example):

sql1.example.com: 192.168.0.101 MySQL cluster node 1
sql2.example.com: 192.168.0.102 MySQL cluster node 2
loadb1.example.com: 192.168.0.103 Load Balancer 1 / MySQL cluster management server
loadb2.example.com: 192.168.0.104 Load Balancer 2
In addition to that we need a virtual IP address: 192.168.0.105. It will be assigned to the MySQL cluster by the load balancer so that applications have a single IP address through which to access the cluster.

Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the MySQL cluster management server, mainly for one reason: if one of the two MySQL cluster nodes fails and the management server is not running, then the data on the two cluster nodes will become inconsistent ("split brain"). We also need it for configuring the MySQL cluster.

So normally we would need five machines for our setup:

2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5

As the MySQL cluster management server does not use many resources and would otherwise just sit there doing nothing, we can put our first load balancer on the same machine, which saves us one machine, so we end up with four machines.


2 Set Up The MySQL Cluster Management Server
First we have to download MySQL 5.0.19 (the max version!) and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in the cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):

loadb1.example.com:

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm

Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini:

loadb1.example.com:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

Please replace the IP addresses in the file appropriately.

Then we start the cluster management server:

loadb1.example.com:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini
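
If you want to verify that the management server is actually running and listening on its default port 1186, a quick check like this should do (just a sanity check; netstat is part of the net-tools package that Debian installs by default):

ps aux | grep ndb_mgmd | grep -iv grep
netstat -tap | grep ndb_mgmd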

It makes sense to automatically start the management server at system boot time, so we create a very simple init script and the appropriate startup links:

loadb1.example.com:

echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
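
The one-line init script above can only start ndb_mgmd; it has no stop or restart support. If you want something slightly more complete, the following sketch could be used as the contents of /etc/init.d/ndb_mgmd instead (it assumes the same paths as used in this tutorial):

#!/bin/sh
# Minimal init script sketch for the MySQL cluster management server.
case "$1" in
  start)
        ndb_mgmd -f /var/lib/mysql-cluster/config.ini
        ;;
  stop)
        killall ndb_mgmd
        ;;
  restart)
        killall ndb_mgmd
        sleep 2
        ndb_mgmd -f /var/lib/mysql-cluster/config.ini
        ;;
  *)
        echo "Usage: /etc/init.d/ndb_mgmd {start|stop|restart}"
        exit 1
        ;;
esac
exit 0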

3 Set Up The MySQL Cluster Nodes (Storage Nodes)
Now we install mysql-max-5.0.19 on both sql1.example.com and sql2.example.com:

sql1.example.com / sql2.example.com:

groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

Then we create the MySQL configuration file /etc/my.cnf on both nodes:

sql1.example.com / sql2.example.com:

vi /etc/my.cnf

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.103

[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.103

Make sure you fill in the correct IP address of the MySQL cluster management server.

Next we create the data directories and start the MySQL server on both cluster nodes:

sql1.example.com / sql2.example.com:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start

(Please note: we have to run ndbd --initial only when we start MySQL for the first time, or whenever /var/lib/mysql-cluster/config.ini on loadb1.example.com changes.)

Now is a good time to set a password for the MySQL root user:

sql1.example.com / sql2.example.com:

mysqladmin -u root password yourrootsqlpassword

We want to start the cluster nodes at boot time, so we create an ndbd init script and the appropriate system startup links:

sql1.example.com / sql2.example.com:

echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults

4 Test The MySQL Cluster
Our MySQL cluster configuration is already finished; now it's time to test it. On the cluster management server (loadb1.example.com), run the cluster management client ndb_mgm to check if the cluster nodes are connected:

loadb1.example.com:

ndb_mgm

You should see this:

-- NDB Cluster -- Management Client --
ndb_mgm>

Now type show; at the command prompt:

show;

The output should be like this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.101  (Version: 5.0.19, Nodegroup: 0, Master)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>

If you see that your nodes are connected, then everything's ok!

Type

quit;

to leave the ndb_mgm client console.

Now we create a test database with a test table and some data on sql1.example.com:

sql1.example.com:

mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
quit;

(Have a look at the CREATE statement: we must use ENGINE=NDBCLUSTER for all database tables that we want to have clustered! If you use another engine, clustering will not work!)
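
(If you already have tables in another storage engine that you want to move into the cluster, ALTER TABLE can convert them. The database and table names below are only placeholders for your own:)

sql1.example.com:

mysql -u root -p
USE yourdatabase;
ALTER TABLE yourtable ENGINE=NDBCLUSTER;
quit;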

The result of the SELECT statement should be:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)

Now we create the same database on sql2.example.com (yes, we still have to create it, but afterwards testtable and its data should be replicated to sql2.example.com because testtable uses ENGINE=NDBCLUSTER):

sql2.example.com:

mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;

The SELECT statement should deliver you the same result as before on sql1.example.com:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)

So the data was replicated from sql1.example.com to sql2.example.com. Now we insert another row into testtable:

sql2.example.com:

INSERT INTO testtable () VALUES (2);
quit;

Now let's go back to sql1.example.com and check if we see the new row there:

sql1.example.com:

mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;

You should see something like this:

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)

So both MySQL cluster nodes always have the same data!

Now let's see what happens if we stop node 1 (sql1.example.com): Run

sql1.example.com:

killall ndbd

and check with

ps aux | grep ndbd | grep -iv grep

that all ndbd processes have terminated. If you still see ndbd processes, run another

killall ndbd

until all ndbd processes are gone.

Now let's check the cluster status on our management server (loadb1.example.com):

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, issue

show;

and you should see this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>

You see, sql1.example.com is not connected anymore.

Type

quit;

to leave the ndb_mgm console.

Let's check sql2.example.com:

sql2.example.com:

mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;

The result of the SELECT query should still be

mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)

Ok, all tests went fine, so let's start our sql1.example.com node again:

sql1.example.com:

ndbd

5 How To Restart The Cluster
Now let's assume you want to restart the MySQL cluster, for example because you have changed /var/lib/mysql-cluster/config.ini on loadb1.example.com, or for some other reason. To do this, you use the ndb_mgm cluster management client on loadb1.example.com:

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, you type

shutdown;

You will then see something like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>

This means that the cluster nodes sql1.example.com and sql2.example.com and also the cluster management server have shut down.

Run

quit;

to leave the ndb_mgm console.

To start the cluster management server, do this on loadb1.example.com:

loadb1.example.com:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

and on sql1.example.com and sql2.example.com you run

sql1.example.com / sql2.example.com:

ndbd

or, if you have changed /var/lib/mysql-cluster/config.ini on loadb1.example.com:

ndbd --initial

Afterwards, you can check on loadb1.example.com if the cluster has restarted:

loadb1.example.com:

ndb_mgm

On the ndb_mgm console, type

show;

to see the current status of the cluster. It might take a few seconds after a restart until all nodes are reported as connected.

Type

quit;

to leave the ndb_mgm console.
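
By the way, if you only want to restart a single storage node instead of the whole cluster, you can also do that from the ndb_mgm console; replace 2 with the node id that show; reports for the node you want to restart:

loadb1.example.com:

ndb_mgm
2 RESTART
quit;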

6 Configure The Load Balancers
Our MySQL cluster is finished, and you could start using it right away. However, we don't have a single IP address that we can use to access the cluster, which means you would have to configure your applications so that one part uses MySQL cluster node 1 (sql1.example.com) and the rest uses the other node (sql2.example.com). Of course, all your applications could use just one node, but what's the point of having a cluster if you do not split the load between the cluster nodes? Another problem: what happens if one of the cluster nodes fails? Then the applications that use this cluster node cannot work anymore.

The solution is to have a load balancer in front of the MySQL cluster which (as its name suggests) balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that is shared between the cluster nodes, and all your applications use this virtual IP address to access the cluster. If one of the nodes fails, your applications will still work, because the load balancer redirects the requests to the working node.

Now in this scenario the load balancer itself becomes a single point of failure: what happens if the load balancer fails? Therefore we will configure two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer while the other one is a hot standby that becomes active if the active one fails. Both load balancers use heartbeat to check if the other load balancer is still alive, and both also use ldirectord, the actual load balancer that splits up the load onto the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that we will install.

It is important that loadb1.example.com and loadb2.example.com have support for IPVS (IP Virtual Server) in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.
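
If you are not sure whether your kernel was built with IPVS support, you can usually check the kernel configuration file that Debian places in /boot (the exact file name depends on your kernel version, so treat this as a sketch):

loadb1.example.com / loadb2.example.com:

grep -i ip_vs /boot/config-$(uname -r)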




6.1 Install Ultra Monkey
Ok, let's start: first we enable IPVS on loadb1.example.com and loadb2.example.com:

loadb1.example.com / loadb2.example.com:

modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
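
To verify that the modules have actually been loaded, you can list them (purely optional):

loadb1.example.com / loadb2.example.com:

lsmod | grep ip_vs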

In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:

loadb1.example.com / loadb2.example.com:

vi /etc/modules

ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr

Now we edit /etc/apt/sources.list and add the Ultra Monkey repositories (don't remove the other repositories), and then we install Ultra Monkey:

loadb1.example.com / loadb2.example.com:

vi /etc/apt/sources.list

deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main

apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev

Now Ultra Monkey is being installed. If you see this warning:

libsensors3 not functional

It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.

If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.

you can ignore it.

Answer the following questions:

Do you want to automatically load IPVS rules on boot?
<-- No

Select a daemon method.
<-- none

The libdbd-mysql-perl package we've just installed does not work with MySQL 5 (we use MySQL 5 on our MySQL cluster...), so we install the newest DBD::mysql Perl package:

loadb1.example.com / loadb2.example.com:

cd /tmp
wget http://search.cpan.org/CPAN/authors/id/C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install

We must enable packet forwarding:

loadb1.example.com / loadb2.example.com:

vi /etc/sysctl.conf

# Enables packet forwarding
net.ipv4.ip_forward = 1

sysctl -p
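
As a quick sanity check, the following should now print 1:

loadb1.example.com / loadb2.example.com:

cat /proc/sys/net/ipv4/ip_forward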

6.2 Configure heartbeat
Next we configure heartbeat by creating three files (all three files must be identical on loadb1.example.com and loadb2.example.com):

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ha.cf

logfacility        local0
bcast        eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node        loadb1
node        loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster

Please note: you must list the node names (in this case loadb1 and loadb2) as shown by

uname -n

Other than that, you don't have to change anything in the file.

vi /etc/ha.d/haresources

loadb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255

You must list one of the load balancer node names (here: loadb1) and list the virtual IP address (192.168.0.105) together with the correct netmask (24) and broadcast address (192.168.0.255). If you are unsure about the correct settings, http://www.subnetmask.info/ might help you.

vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5, and sha1); I use md5 here.

/etc/ha.d/authkeys should be readable by root only, therefore we do this:

loadb1.example.com / loadb2.example.com:

chmod 600 /etc/ha.d/authkeys




6.3 Configure ldirectord
Now we create the configuration file for ldirectord, the load balancer:

loadb1.example.com / loadb2.example.com:

vi /etc/ha.d/ldirectord.cf

# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr

Please fill in the correct virtual IP address (192.168.0.105) and the correct IP addresses of your MySQL cluster nodes (192.168.0.101 and 192.168.0.102). 3306 is the port that MySQL runs on by default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and an SQL query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check if they are still available. We are going to create the ldirector database with the ldirector user in the next step.
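
Once the ldirector user and the ldirectordb database exist (we create them in the next step), you can imitate ldirectord's health check by hand from one of the load balancers to make sure the check query itself works. This is only a manual sanity check, and it assumes the mysql command line client is installed on the load balancer (apt-get install mysql-client if it is not):

loadb1.example.com / loadb2.example.com:

mysql -h 192.168.0.101 -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck;"
mysql -h 192.168.0.102 -u ldirector -pldirectorpassword ldirectordb -e "SELECT * FROM connectioncheck;"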

Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):

loadb1.example.com / loadb2.example.com:

update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove

6.4 Create A Database Called ldirectordb
Next we create the ldirectordb database on our MySQL cluster nodes sql1.example.com and sql2.example.com. This database will be used by our load balancers to check the availability of the MySQL cluster nodes.

sql1.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck () VALUES (1);
quit;

sql2.example.com:

mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;




6.5 Prepare The MySQL Cluster Nodes For Load Balancing
Finally we must configure our MySQL cluster nodes sql1.example.com and sql2.example.com to accept requests on the virtual IP address 192.168.0.105.

sql1.example.com / sql2.example.com:

apt-get install iproute

Add the following to /etc/sysctl.conf:

sql1.example.com / sql2.example.com:

vi /etc/sysctl.conf

# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2

sysctl -p

Add this section for the virtual IP address to /etc/network/interfaces:

sql1.example.com / sql2.example.com:

vi /etc/network/interfaces

auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null

ifup lo:0
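
You can check that the virtual IP address is now configured on the loopback device; the output should show 192.168.0.105 with a /32 netmask on lo:

sql1.example.com / sql2.example.com:

ip addr sh lo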

7 Start The Load Balancer And Do Some Testing
Now we can start our two load balancers for the first time:

loadb1.example.com / loadb2.example.com:

/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start

If you don't see errors, you should now reboot both load balancers:

loadb1.example.com / loadb2.example.com:

shutdown -r now

After the reboot we can check if both load balancers work as expected:

loadb1.example.com / loadb2.example.com:

ip addr sh eth0

The active load balancer should list the virtual IP address (192.168.0.105):

2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:45:fc:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0

The hot-standby should show this:

2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:16:c1:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0

loadb1.example.com / loadb2.example.com:

ldirectord ldirectord.cf status

Output on the active load balancer:

ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1603

Output on the hot-standby:

ldirectord is stopped for /etc/ha.d/ldirectord.cf

loadb1.example.com / loadb2.example.com:

ipvsadm -L -n

Output on the active load balancer:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.105:3306 wrr
  -> 192.168.0.101:3306           Route   1      0          0
  -> 192.168.0.102:3306           Route   1      0          0

Output on the hot-standby:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

loadb1.example.com / loadb2.example.com:

/etc/ha.d/resource.d/LVSSyncDaemonSwap master status

Output on the active load balancer:

master running
(ipvs_syncmaster pid: 1766)

Output on the hot-standby:

master stopped
(ipvs_syncbackup pid: 1440)

If your tests went fine, you can now try to access the MySQL database from a totally different server in the same network (192.168.0.x) using the virtual IP address 192.168.0.105:

mysql -h 192.168.0.105 -u ldirector -p

(Please note: your MySQL client must at least be of version 4.1; older versions do not work with MySQL 5.)

You can now switch off one of the MySQL cluster nodes for test purposes; you should then still be able to connect to the MySQL database.
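
If you want to watch what the load balancer does during such a test, keep the IPVS table on the active load balancer under observation while you stop and restart ndbd on one of the cluster nodes. Because of quiescent=yes in ldirectord.cf, the weight of the failed real server should drop to 0 and return to 1 once the node is back (watch is part of the standard procps package):

watch -n 2 ipvsadm -L -n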




8 Annotations
There are some important things to keep in mind when running a MySQL cluster:

- All data is stored in RAM! Therefore you need lots of RAM on your cluster nodes. The formula for how much RAM you need on each node is:

(SizeofDatabase × NumberOfReplicas × 1.1 ) / NumberOfDataNodes

So if you have a database that is 1 GB in size, you would need 1.1 GB of RAM on each node!
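
For example, with a 4 GB database, NoOfReplicas=2 and two data nodes, the formula gives (4 GB × 2 × 1.1) / 2 = 4.4 GB of RAM per data node, plus whatever the operating system and the mysqld processes themselves need.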

- The cluster management node listens on port 1186, and anyone can connect. So that's definitely not secure, and therefore you should run your cluster in an isolated private network!

It's a good idea to have a look at the MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html and also at the MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html




Links
MySQL: http://www.mysql.com/

MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html

MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html

Ultra Monkey: http://www.ultramonkey.org/

The High-Availability Linux Project: http://www.linux-ha.org/
