High Availability MySQL on Ubuntu 8.04

July 07, 2008 · 9 min read

In my previous post I showed how to build a high availability web tier using Heartbeat and Apache. That’s great for static pages, but what about dynamic, database-driven sites? How do we protect the database against node failure?

Preparation

You’ll need two boxes and three IP addresses. I’m using virtual machines from Xeriom Networks again. Both are firewalled, with MySQL and Heartbeat ports opened so the servers can talk to each other but nobody else can reach them.

# On db-01
sudo iptables -I INPUT 3 -p tcp --dport mysql -s db-02.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 3 -p udp --dport mysql -s db-02.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 3 -p udp --dport 694 -s db-02.vm.xeriom.net -j ACCEPT

# On db-02
sudo iptables -I INPUT 3 -p tcp --dport mysql -s db-01.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 3 -p udp --dport mysql -s db-01.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 3 -p udp --dport 694 -s db-01.vm.xeriom.net -j ACCEPT

Your firewall rules should look something like this. The important lines end in tcp dpt:mysql, udp dpt:mysql, and dpt:694. Each node’s rules should open ports for the other node:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     udp  --  db-01                anywhere            udp dpt:694
ACCEPT     udp  --  db-01                anywhere            udp dpt:mysql
ACCEPT     tcp  --  db-01                anywhere            tcp dpt:mysql
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh

Save your firewall rules so they survive a reboot:

sudo sh -c "iptables-save -c > /etc/iptables.rules"
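To have the saved rules loaded again at boot, one option is to restore them just before the network interface comes up. A minimal sketch for /etc/network/interfaces – the eth0 name and dhcp method are assumptions about your box:

```
# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet dhcp
    # restore the saved firewall rules before the interface comes up
    pre-up iptables-restore < /etc/iptables.rules
```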

For this post, assume the following IP addresses are available:

  • 193.219.108.241 – db-01 (db-01.vm.xeriom.net)
  • 193.219.108.242 – db-02 (db-02.vm.xeriom.net)
  • 193.219.108.243 – Not assigned (becomes the floating IP)

Start small

Install and configure MySQL on each box:

sudo apt-get install mysql-server --yes

Set a strong root password during installation. Once it’s done, edit /etc/mysql/my.cnf to make MySQL listen on all interfaces:

bind-address = 0.0.0.0

Restart MySQL and verify it’s running:

sudo /etc/init.d/mysql restart
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> \q

If you got the mysql> prompt, you’re good. Now test cross-node connectivity:

mysql -h db-02.vm.xeriom.net -u root -p
Enter password: [enter the MySQL root password you chose earlier]
ERROR 1130 (HY000): Host 'db-01' is not allowed to connect to this MySQL server

That error is actually a good sign – MySQL accepted the connection and then refused to authorise the client. We’ll create proper replication accounts shortly. If you get a different error (such as Can't connect to MySQL server on 'db-02' (111)), check that MySQL is running on both boxes and that the firewall rules are correct.

One-way replication

Let’s start with simple master-slave replication. On db-01, edit /etc/mysql/my.cnf and configure the binary log under the replication section:

server-id               = 1
log_bin                 = /var/log/mysql/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
binlog_do_db            = my_application
binlog_ignore_db        = mysql
binlog_ignore_db        = test

On db-01, grant replication slave rights to db-02. Use a real, strong password in place of some_password:

mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> grant replication slave on *.* to 'replication'@'db-02.vm.xeriom.net' identified by 'some_password';
mysql> \q
sudo /etc/init.d/mysql restart

On db-02, configure it to replicate from db-01 by editing /etc/mysql/my.cnf:

server-id                 = 2
master-host               = db-01.vm.xeriom.net
master-user               = replication
master-password           = some_password
master-port               = 3306

Restart MySQL on db-02 and check the slave status. If Slave_IO_State says “Waiting for master to send event”, you’re in business:

# Run this on db-02 only
sudo /etc/init.d/mysql restart
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> show slave status \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 193.219.108.241
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000005
        Read_Master_Log_Pos: 98
             Relay_Log_File: mysqld-relay-bin.000004
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000005
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 98
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)
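If you’d rather script this check than eyeball the \G output every time, a small shell helper can parse it. This is a sketch of my own, not part of the original setup:

```shell
#!/bin/sh
# check_slave_status: read the output of `SHOW SLAVE STATUS \G` on stdin
# and print OK when both replication threads report Yes, BROKEN otherwise.
check_slave_status() {
    status=$(cat)
    io=$(printf '%s\n' "$status" | grep -c 'Slave_IO_Running: Yes')
    sql=$(printf '%s\n' "$status" | grep -c 'Slave_SQL_Running: Yes')
    if [ "$io" -eq 1 ] && [ "$sql" -eq 1 ]; then
        echo OK
    else
        echo BROKEN
    fi
}
```

You could then run something like mysql -u root -p -e 'show slave status \G' | check_slave_status from cron or a monitoring script.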

Now let’s prove it works. Create the my_application database on db-01 and watch it appear on db-02:

# On both nodes
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> show databases;

You should see mysql and test.

# On db-01 only
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> create database my_application;
# On both nodes
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> show databases;

The my_application database should now appear on both nodes. If it doesn’t (it didn’t for me the first time), read on.

Troubleshooting one-way replication

If the slave status doesn’t show Slave_IO_State: Waiting for master to send event, Slave_IO_Running: Yes, and Slave_SQL_Running: Yes, something is off.

Telnet is brilliant for debugging connectivity issues. Install it if you haven’t already:

sudo apt-get install telnet

SSH to db-02 and telnet to db-01 on the MySQL port:

# On db-02
telnet db-01.vm.xeriom.net mysql

The problem I hit was ERROR 1130 (HY000): Host 'db-02' is not allowed to connect to this MySQL server. This happens when you use the full hostname (db-02.vm.xeriom.net) in the grant statement but MySQL resolves the connecting host to a short name (db-02) via /etc/hosts. Run the grant again using whatever hostname appears in the error message:

# On db-01
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> grant replication slave on *.* to 'replication'@'db-02' identified by 'some_password';
mysql> \q
sudo /etc/init.d/mysql restart

Another gotcha: if the slave status stays at “connecting to master” for a long time and telnet works fine, you probably have the same server-id on both servers. Check /etc/mysql/my.cnf, fix the values, and restart MySQL.

Master-master replication

One-way replication protects your data, but if you accidentally write to the slave (db-02), at best the databases will be inconsistent – and at worst, replication will break entirely.

Setting up replication in both directions gives you a consistent dataset on both nodes, regardless of which one receives writes.

On db-02, edit /etc/mysql/my.cnf to enable the binary log:

log_bin                 = /var/log/mysql/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
binlog_do_db            = my_application
binlog_ignore_db        = mysql
binlog_ignore_db        = test

Grant replication slave privileges on db-02 for the replication user on db-01:

# On db-02
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> grant replication slave on *.* to 'replication'@'db-01.vm.xeriom.net' identified by 'some_password';

On db-01, edit /etc/mysql/my.cnf to replicate from db-02:

master-host               = db-02.vm.xeriom.net
master-user               = replication
master-password           = some_password
master-port               = 3306

Restart MySQL on both boxes and check the slave status on each. Both should report Slave_IO_State: Waiting for master to send event, Slave_IO_Running: Yes, and Slave_SQL_Running: Yes. If not, work through the troubleshooting section above.

sudo /etc/init.d/mysql restart
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> show slave status \G

If you’ve got this far, your database is now a master-master cluster. Sweet, sweet redundancy.

Heartbeat

The data is replicated both ways, so your data is safe if a node goes down. But applications still need to know which host to connect to, and right now failover would have to be handled by the application itself.

I wrote previously about using Heartbeat for high availability Apache. We’ll use the same technique here: a floating IP address that Heartbeat moves to whichever database node is alive. Applications connect to this IP, and Heartbeat makes sure it always points at a live server. Since both databases replicate from each other, it doesn’t matter which node gets the traffic.

Install Heartbeat on both boxes:

sudo apt-get install heartbeat

Copy the sample configuration files:

sudo cp /usr/share/doc/heartbeat/authkeys /etc/ha.d/
sudo sh -c "zcat /usr/share/doc/heartbeat/ha.cf.gz > /etc/ha.d/ha.cf"
sudo sh -c "zcat /usr/share/doc/heartbeat/haresources.gz > /etc/ha.d/haresources"

Lock down authkeys:

sudo chmod go-wrx /etc/ha.d/authkeys

Edit /etc/ha.d/authkeys and add a password:

auth 2
2 sha1 your-password-here
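Rather than inventing a password, you can generate a random secret for authkeys. A quick sketch using /dev/urandom:

```shell
# generate a random 160-bit hex secret suitable for /etc/ha.d/authkeys
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 2\n2 sha1 %s\n' "$KEY"
```

Paste the two printed lines into /etc/ha.d/authkeys on both nodes – the file must match exactly.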

Configure ha.cf for your network. Node names must match the output of uname -n:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node db-01.vm.xeriom.net
node db-02.vm.xeriom.net

Edit haresources to assign the floating IP. This file must be identical on both nodes, with the hostname matching uname -n on db-01:

db-01.vm.xeriom.net 193.219.108.243
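If you need to pin the floating IP to a particular netmask or interface, haresources also accepts the longer IPaddr resource syntax. The /24 netmask and eth0 interface here are assumptions about this network:

```
db-01.vm.xeriom.net IPaddr::193.219.108.243/24/eth0
```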

Start Heartbeat on db-01, then db-02:

sudo /etc/init.d/heartbeat start

This takes a while to start. Watch progress with tail -f /var/log/ha-log. Eventually db-01 should report:

heartbeat[7734]: 2008/07/07_17:19:34 info: Initial resource acquisition complete (T_RESOURCES(us))
IPaddr[7739]:   2008/07/07_17:19:37 INFO:  Running OK
heartbeat[7745]: 2008/07/07_17:19:37 info: Local Resource acquisition completed.

Testing it all

Until now, both database boxes only allowed MySQL connections from each other. To verify failover, we need to connect from an external machine. Find the public IP of your test box (here it’s 193.214.108.10) and open access on both database boxes:

# On both boxes
sudo iptables -I INPUT 3 -p tcp --dport mysql -s 193.214.108.10 -d 193.219.108.243 -j ACCEPT

Create a test user on both boxes:

# On both boxes
mysql -u root -p
Enter password: [enter the MySQL root password you chose earlier]
mysql> grant all on my_application.* to 'some_user'@'193.214.108.10' identified by 'some_other_password';
mysql> grant replication client on *.* to 'some_user'@'193.214.108.10';
mysql> \q

Now connect to the floating IP from your test box:

mysql -u some_user -p -h 193.219.108.243 my_application
mysql> show slave status \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 193.219.108.242
[unimportant lines snipped]

Note that the master host is 193.219.108.242 – that is, db-02 – so the floating IP is currently on db-01. Stop Heartbeat on db-01 (or shut the box down) and run the query again – the floating IP moves to db-02, and the master host changes to the other node’s IP.

Bring db-01 back up and query once more. The master host should be back to what it was originally.
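If you’d rather script this failover check than read the \G output by eye, a tiny helper (again, my own sketch) can pull out the Master_Host field:

```shell
#!/bin/sh
# master_host: print the Master_Host value from `SHOW SLAVE STATUS \G`
# output supplied on stdin.
master_host() {
    awk '/Master_Host:/ {print $2; exit}'
}
```

On the test box, something like mysql -u some_user -p -h 193.219.108.243 -e 'show slave status \G' my_application | master_host would print the address of whichever node currently holds the floating IP’s partner.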

Auto-increment offsets

With both nodes able to accept writes, auto-generated primary keys can collide and break replication. To avoid this, check out avoiding auto_increment collision.
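The usual approach – sketched here as an assumption about your my.cnf; adapt to taste – is to give each node the same increment but a different offset, so the keys the two masters generate interleave instead of colliding:

```
# /etc/mysql/my.cnf on db-01
auto_increment_increment = 2
auto_increment_offset    = 1

# /etc/mysql/my.cnf on db-02
auto_increment_increment = 2
auto_increment_offset    = 2
```

With these settings db-01 generates 1, 3, 5, … and db-02 generates 2, 4, 6, … for auto_increment columns.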

These posts are LLM-aided. Backbone, original writing, and structure by Craig. Research and editing by Craig + LLM. Proof-reading by Craig.