The technologies for building highly-available (HA) MySQL solutions are in constant evolution, and they cover very different needs and use cases. In order to help people choose the best HA solution for their needs, Jay Janssen and I decided to publish, on a regular basis (hopefully, this is the first of many), an update on the most common technologies and their state, with a focus on what type of workloads suit them best. We restricted ourselves to the open source solutions that provide automatic failover. Of course, don't simply count the positive and negative items; they don't all carry the same weight. Should you pick any of these technologies, heavy testing is mandatory: HA never goes beyond the scenarios that have been tested.
Percona XtraDB Cluster (PXC)
Positive points:
- Almost synchronous replication, with very little lag, if any
- Automatic failover
- At its best with small transactions
- All nodes are writable
- Very small read-after-write lag; usually nothing to worry about
- Scales reads very well and, to some extent, writes
- New nodes are provisioned automatically through State Snapshot Transfer (SST)
- Multi-threaded apply, giving greater write capacity than regular replication
- Can do geographical disaster recovery (Geo DR)
- More resilient to unresponsive nodes (swapping)
- Can resolve split-brain situations by itself

Negative points:
- Still under development, with some rough edges
- Large transactions, like multi-statement transactions or large write operations, cause issues and are usually not a good fit
- For quorum reasons, 3 nodes are needed, but one can be a lightweight arbitrator
- SST can be heavy over a WAN
- Commits are affected by network latency, which especially impacts Geo DR
- To achieve HA, a load balancer, like HAProxy, is needed (see the sketch after this list)
- Failover time is determined by the load balancer's check frequency
- Performance is affected by the weakest/busiest node
- Foreign keys are potential issues
- MyISAM should be avoided
- Can be mixed with regular async replication as master or slave, but slaves are not easy to reconfigure after an SST on their master
- Requires careful setup of the host; swapping can lead to node expulsion from the cluster
- No manual failover mode
- Debugging some Galera protocol issues isn't trivial
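Since PXC itself exposes no client-facing failover, the load balancer does that job. Here is a minimal HAProxy sketch, assuming the common pattern where each node runs a clustercheck-style HTTP health check on port 9200; the hostnames, IPs and timings are made up for illustration.

```
# Minimal HAProxy sketch for a 3-node PXC cluster (illustrative values).
# Assumes each node serves an HTTP health check (e.g. clustercheck behind
# xinetd) on port 9200, returning 200 only when the node is usable.
listen pxc_cluster
    bind 0.0.0.0:3306
    mode tcp
    balance leastconn
    option httpchk                  # query the health check, not just the TCP port
    server node1 10.0.0.1:3306 check port 9200 inter 2000 rise 2 fall 3
    server node2 10.0.0.2:3306 check port 9200 inter 2000 rise 2 fall 3
    server node3 10.0.0.3:3306 check port 9200 inter 2000 rise 2 fall 3
```

The check parameters also illustrate the failover-time point above: with a 2s check interval and fall 3, a dead node is pulled out of rotation in roughly 6s.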
Percona Replication Manager (PRM)
Percona Replication Manager (PRM) uses the Linux-HA Pacemaker resource manager to manage MySQL and replication and to provide high availability. Information about PRM can be found
here; the official page on the Percona website is in the making.
Positive points:
- Nothing specific regarding the workload
- Unlimited number of slaves
- Slaves can have different roles
- Typically VIP-based access, usually 1 writer VIP and many reader VIPs
- Also works without VIPs (see the fake_mysql_novip agent)
- Detects if a slave lags too much and removes its reader VIP
- All nodes are monitored
- The best slave is picked as the new master after a failover
- Geographical disaster recovery is possible with the lightweight booth protocol
- Can be operated in manual failover mode
- Graceful failover is quick, under 2s in normal conditions
- Ungraceful failover takes under 30s
- Distributed operation with Pacemaker, no single point of failure
- Built-in Pacemaker logic, STONITH, etc. Very rich and flexible.

Negative points:
- Still under development, with some rough edges
- Transactions may be lost if the master crashes (async replication)
- For quorum reasons, 3 nodes are needed, but one can be a lightweight arbitrator
- Only one node is writable
- Reads after writes may not be consistent (replication lag)
- Only scales reads
- Requires careful setup of the host; swapping can lead to node expulsion from the cluster
- Data inconsistency can happen if the master crashes (fix coming)
- Pacemaker is complex; its logs are difficult to read and understand
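To give a feel for the Pacemaker side, here is a trimmed crm configuration sketch along the lines of a PRM setup: a master/slave (ms) clone of the Percona MySQL resource agent. The agent location (ocf:percona:mysql), parameter values and timings below are illustrative assumptions, not a copy of the official sample.

```
# Illustrative crm snippet: MySQL replication as a Pacemaker master/slave set.
primitive p_mysql ocf:percona:mysql \
    params config="/etc/my.cnf" \
        replication_user="repl" replication_passwd="repl_pwd" \
        max_slave_lag="60" evict_outdated_slaves="false" \
    op monitor interval="5s" role="Master" \
    op monitor interval="2s" role="Slave"
# One master among up to 3 clones; Pacemaker promotes the best slave on failover.
ms ms_mysql p_mysql \
    meta master-max="1" clone-max="3" notify="true" globally-unique="false"
```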
MySQL Master HA (MHA)
Like PRM above, MySQL Master HA (MHA) provides high availability through replication. The approach is different: instead of relying on an HA framework like Pacemaker, it uses Perl scripts. Information about MHA can be found
here.
Positive points:
- Mature
- Nothing specific regarding the workload
- No latency effects on writes
- Can have many slaves, and slaves can have different roles
- Very good binlog/relay log handling
- Works pretty hard to minimize data loss
- Can be operated in manual failover mode
- Graceful failover is quick, under 5s in normal conditions
- If the master crashes, the slaves will be made consistent
- The logic is fairly easy to understand

Negative points:
- Transactions may be lost if the master crashes (async replication)
- Only one node is writable
- Reads after writes may not be consistent (replication lag)
- Only scales reads
- Monitoring and logic are centralized, a single point of failure, and a network partition can cause a split-brain
- Custom fencing devices and custom VIP scripts; no reuse of other projects' tools
- Most deployments use manual failover (at least at Percona)
- Requires privileged SSH access to read the relay logs, which can be a security concern
- No monitoring of the slaves to invalidate one that lags too much or whose replication is broken; this needs to be done by an external tool like HAProxy
- Requires careful setup of the host; swapping can lead to node expulsion from the cluster
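As a taste of how MHA is driven, here is a minimal manager configuration sketch; the hostnames, paths and credentials are placeholders.

```
# /etc/masterha/app1.cnf -- minimal MHA sketch (placeholder values)
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
user=mha                  # MySQL account used by the manager
password=mha_pwd
ssh_user=root             # privileged SSH access, needed to read relay logs
repl_user=repl
repl_password=repl_pwd

[server1]
hostname=db1
candidate_master=1

[server2]
hostname=db2
candidate_master=1

[server3]
hostname=db3
no_master=1               # never promote this slave
```

The manager is then started with masterha_manager --conf=/etc/masterha/app1.cnf and watches the master until a failover is needed.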
NDB Cluster
NDB Cluster is the most high-end form of high-availability configuration for MySQL. It is a complete shared-nothing architecture where the storage engine is distributed over multiple servers (data nodes). Probably the best starting point with NDB is the official documentation,
here.
Positive points:
- Mature
- Synchronous replication
- Very good at small transactions
- Very good at high concurrency (many client threads)
- Huge transaction capacity, rates above 1M trx/s are not uncommon
- Failover can be ~1s
- No single point of failure
- Geographical disaster recovery capacity built in
- Strong at async replication; applying in batches gives multi-threaded apply at the data node level
- Can scale reads and writes; the framework implements sharding by hashes (see the example after this list)

Negative points:
- Not a drop-in replacement for InnoDB; you need to tune the schema and the queries
- Not a general purpose database; some loads, like reporting, are just bad
- Only the READ COMMITTED isolation level is available
- Hardware heavy, needs a minimum of 4 servers for full HA
- Memory (RAM) hungry, even with disk-based tables
- Complex to operate, with lots of parameters to adjust
- Needs a load balancer for failover
- Very new foreign key support, and field reports on it are scarce
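To illustrate the built-in sharding mentioned above: any NDB table is hash-partitioned across the data nodes by default. The table below is a made-up example.

```
-- Made-up example: rows are distributed across the data nodes by a hash
-- of the primary key (implicit PARTITION BY KEY), so reads and writes for
-- different users land on different node groups.
CREATE TABLE user_session (
    user_id      BIGINT NOT NULL,
    session_data VARBINARY(1024),
    PRIMARY KEY (user_id)
) ENGINE=NDBCLUSTER;
```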
Shared storage/DRBD
Achieving high availability using a shared storage medium is an old and well-known method. It is used by nearly all the major databases. The shared storage can be a DAS connected to two servers, a LUN on a SAN accessible from two servers, or a DRBD partition replicated synchronously over the network. DRBD is by far the most common shared storage device used in the MySQL world.
Positive points:
- Mature
- Synchronous replication (DRBD)
- Automatic failover is easy to implement
- VIP-based access

Negative points:
- Write capacity is impacted by network latency for DRBD
- SANs are expensive
- Only for InnoDB
- The standby node is a big server doing nothing
- Needs a warm-up period after failover to be fully operational
- Disk corruption can spread
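For reference, a DRBD synchronous pair is mostly a matter of a small resource definition; the hosts, devices and addresses below are placeholders.

```
# Illustrative DRBD resource (placeholder hosts/devices). Protocol C makes
# replication synchronous: a write completes only once it is on both disks.
resource r0 {
    protocol C;
    on db1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on db2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

MySQL then runs only on the node currently holding the primary role, with a cluster manager like Pacemaker mounting the filesystem and moving the VIP on failover.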