Load-Balanced, Highly Available MySQL on Ubuntu 8.04

July 20, 2008

If you followed my previous post about [high availability MySQL](http://barkingiguana.com/2008/07/07/high-availability-mysql-on-ubuntu-804), your application now has one less single point of failure. But what happens when the cluster starts getting overloaded? By load-balancing MySQL connections across hosts, you can handle a larger volume of queries without breaking a sweat.
*A load-balanced database cluster*
## Requirements

This article builds on the MySQL cluster from [my previous post](http://barkingiguana.com/2008/07/07/high-availability-mysql-on-ubuntu-804). Set that up first if you haven't already. You'll also need two more virtual machines, each with one IP address:

* 193.219.108.239 -- lb-db-01 (lb-db-01.vm.xeriom.net)
* 193.219.108.240 -- lb-db-02 (lb-db-02.vm.xeriom.net)
* \* 193.219.108.241 -- db-01 (db-01.vm.xeriom.net)
* \* 193.219.108.242 -- db-02 (db-02.vm.xeriom.net)
* \* 193.219.108.243 -- virtual IP address

IP addresses marked with \* are carried over from the previous article. All boxes have been [firewalled](http://barkingiguana.com/2008/06/22/firewall-a-pristine-ubuntu-804-box). That's just plain common sense.

## We have the technology

Install Heartbeat and MySQL Proxy on both load balancer boxes:

```bash
sudo apt-get install heartbeat mysql-proxy --yes
```

## Configure and run MySQL Proxy

Open the firewall on the database boxes so the load balancers can connect:

```bash
# On db-01 and db-02
sudo iptables -I INPUT 4 -p tcp \
  --dport mysql -s lb-db-01.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport mysql -s lb-db-02.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

If you followed the previous post, you'll probably want to remove the rule that allowed your test box to access MySQL on the floating IP. Not critical right now, but good hygiene -- and it becomes important in production.

```bash
# On db-01 and db-02
sudo iptables -D INPUT -p tcp --dport mysql -s 193.219.108.10 \
  -d 193.219.108.243 -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

(Swap in your own floating IP and test box IP, or you'll get a "bad rule" error.)

You'll also need to open the MySQL Proxy port on the load balancer boxes. Note that MySQL Proxy listens on port 4040, not the standard MySQL port 3306. My test box here is 193.219.108.10 -- substitute your own.
```bash
# On lb-db-01
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d lb-db-01.vm.xeriom.net -s 193.219.108.10 -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

```bash
# On lb-db-02
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d lb-db-02.vm.xeriom.net -s 193.219.108.10 -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

Start the proxy on both boxes, pointing it at the real database servers, then test from your test box:

```bash
sudo /usr/sbin/mysql-proxy \
  --proxy-backend-addresses=db-01.vm.xeriom.net:3306 \
  --proxy-backend-addresses=db-02.vm.xeriom.net:3306 \
  --daemon
```

```bash
# On the test box
mysql -u some_user -p'some_other_password' -h lb-db-01.vm.xeriom.net
mysql> \q
mysql -u some_user -p'some_other_password' -h lb-db-02.vm.xeriom.net
mysql> \q
```

If MySQL tells you the load balancer hosts don't have access, log into the database nodes and grant permissions using the hostname from the error message:

```
ERROR 1130 (00000): Host 'lb-db-01' is not allowed to connect to this MySQL server
```

```bash
# On db-01 and db-02
mysql -u root -p
Enter password: [Enter your MySQL root password]
mysql> grant all on my_application.* to 'some_user'@'lb-db-01' identified by 'some_other_password';
mysql> grant all on my_application.* to 'some_user'@'lb-db-02' identified by 'some_other_password';
mysql> \q
```

If you got MySQL prompts both times, both proxies are working.
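One thing worth knowing before you start pulling plugs later: a connection attempt can fail for a moment while an IP moves between boxes. If you script your checks, a small retry wrapper makes them more forgiving. This is only a sketch -- `true` below is a stand-in for your real `mysql` invocation (something like `mysql -u some_user -p'...' -h your-proxy-host -e 'show processlist;'`):

```shell
# Sketch: retry a command up to N times, pausing between attempts.
# Useful when a connection briefly fails during failover.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0          # success: stop retrying
    echo "attempt $i failed, retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  return 1                    # every attempt failed
}

# 'true' stands in for the real query command here
retry 3 true && echo "query ok"
```

Swap `true` for your actual query command and the wrapper behaves the same way: it returns success as soon as one attempt gets through.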
Now tighten things up: remove the rules allowing direct access to each load balancer node and add rules that only permit access via the floating IP:

```bash
# On lb-db-01
sudo iptables -D INPUT -p tcp \
  --dport 4040 -d lb-db-01.vm.xeriom.net -s 193.219.108.10 \
  -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d 193.219.108.243 -s 193.219.108.10 \
  -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

```bash
# On lb-db-02
sudo iptables -D INPUT -p tcp \
  --dport 4040 -d lb-db-02.vm.xeriom.net -s 193.219.108.10 \
  -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d 193.219.108.243 -s 193.219.108.10 \
  -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

## Configure and run Heartbeat

Open up the firewall for Heartbeat communication and populate its configuration files.

```bash
# On lb-db-01
sudo iptables -I INPUT 4 -p udp \
  --dport 694 -s lb-db-02.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

```bash
# On lb-db-02
sudo iptables -I INPUT 4 -p udp \
  --dport 694 -s lb-db-01.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
```

```bash
# On both load balancer boxes
sudo cp /usr/share/doc/heartbeat/authkeys /etc/ha.d/
sudo sh -c "zcat /usr/share/doc/heartbeat/ha.cf.gz > /etc/ha.d/ha.cf"
sudo sh -c "zcat /usr/share/doc/heartbeat/haresources.gz > /etc/ha.d/haresources"
```

Lock down `authkeys`:

```bash
sudo chmod go-wrx /etc/ha.d/authkeys
```

Edit `/etc/ha.d/authkeys` and add a password:

```
auth 2
2 sha1 your-password-here
```

Configure `ha.cf` for your network. Node names **must** match the output of `uname -n`:

```
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node lb-db-01.vm.xeriom.net
node lb-db-02.vm.xeriom.net
```

Edit `/etc/ha.d/haresources` to assign the floating IP, with lb-db-01 as the preferred node.
This file must be *identical* on both boxes:

```
lb-db-01.vm.xeriom.net 193.219.108.243
```

If you had Heartbeat running on the database boxes from the last article, remove it now:

```bash
# On the database boxes
sudo apt-get remove heartbeat
```

Then remove the IP alias from eth0 on both database boxes:

```bash
# On the database boxes
sudo ifconfig eth0 inet 193.219.108.243 -alias
```

Now fire up Heartbeat on the load balancer boxes:

```bash
# On lb-db-01 then lb-db-02
sudo /etc/init.d/heartbeat restart
```

## Testing, testing, testing

Connect to the floating IP from your test box:

```bash
mysql -u some_user -p'some_other_password' -h 193.219.108.243 my_application
```

Here's the testing procedure. At every step, your query should return results:

1. Run a query such as `show processlist;`
2. Shut down db-01
3. Run the query again
4. Start db-01
5. Shut down db-02
6. Run the query again
7. Start db-02
8. Shut down lb-db-01
9. Run the query again
10. Shut down db-01
11. Run the query again
12. Start db-01
13. Shut down db-02
14. Run the query again
15. Start db-02
16. Start lb-db-01
17. Run the query again

If your query succeeded every time, congratulations -- you've got a load-balanced, highly available MySQL instance.

## Where to go from here

High availability and load balancing don't protect you from mistakes. Back up often, and verify that you can *restore* from those backups. You might also want to look into building a MySQL binlog-only server for point-in-time recovery.

MySQL Proxy speaks [Lua](http://www.lua.org/). Learning to write Lua scripts for it opens up some powerful possibilities -- query rewriting, read/write splitting, and more.

I haven't documented scaling beyond two load balancers and two database nodes here. It's possible, but don't just add more master-master nodes without doing your homework.
Depending on your data and access patterns, you might be better served by sharding, federation, master-slave replication with read replicas, or schema optimisation. There's no universal answer: how you scale depends entirely on your data and how you use it. Do the research... and be sure to blog about it and let me know how it goes.
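As a taste of the Lua scripting mentioned above: MySQL Proxy calls hooks you define, such as `read_query()`, for traffic passing through it. The sketch below is from memory of the MySQL Proxy examples, so treat the API details as assumptions rather than gospel; it logs each query a client sends.

```lua
-- log-queries.lua (hypothetical name): log every query the proxy sees.
-- mysql-proxy calls read_query() for each command packet from a client;
-- the first byte is the command type, the rest is the payload.
function read_query(packet)
  if packet:byte() == proxy.COM_QUERY then
    print("query: " .. packet:sub(2))
  end
end
```

You'd load it by adding something like `--proxy-lua-script=log-queries.lua` to the `mysql-proxy` invocation used earlier, alongside the backend-address flags.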
Questions or thoughts? Get in touch.