When working with group replication, MySQL Router would be the obvious choice for the connection layer, as it is tightly integrated with the rest of the InnoDB Cluster stack.

The problem is that, except for simple workloads, MySQL Router’s performance is still not on par with other proxies such as HAProxy or even ProxySQL. Fortunately, HAProxy and group replication can work together as well. Let’s see how this can be accomplished.

Architecture

For our testing, let’s deploy an InnoDB ClusterSet in single-writer mode. 

There are three nodes on the primary site:

  • mysql1-t1
  • mysql2-t1
  • mysql3-t1

And three nodes on the DR site:

  • mysql1-t2
  • mysql2-t2
  • mysql3-t2

In this scenario, group replication is used locally between the nodes on each site, and the two sites are linked together via asynchronous replication. Local or regional failover can be controlled from MySQL Shell.
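
For illustration, here is how such a topology could be built and how a switchover could later be triggered with the AdminAPI from MySQL Shell. The cluster and domain names ("site1", "site2", "myclusterset") and the clusteradmin account are placeholders for this sketch:

// MySQL Shell, JS mode; "clusteradmin", "site1", "site2" and "myclusterset" are placeholder names
shell.connect("clusteradmin@mysql1-t1:3306")
cluster = dba.createCluster("site1")
cluster.addInstance("clusteradmin@mysql2-t1:3306")
cluster.addInstance("clusteradmin@mysql3-t1:3306")

cs = cluster.createClusterSet("myclusterset")
dr = cs.createReplicaCluster("clusteradmin@mysql1-t2:3306", "site2")
dr.addInstance("clusteradmin@mysql2-t2:3306")
dr.addInstance("clusteradmin@mysql3-t2:3306")

// Local switchover (new primary within the same site)
cluster.setPrimaryInstance("mysql2-t1:3306")

// Regional switchover (promote the DR cluster)
cs.setPrimaryCluster("site2")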

Similar to using HAProxy as the connection layer for Percona XtraDB Cluster, health checks can be used to determine which members of the cluster should receive reads or writes. This is usually accomplished through an xinetd service that runs the actual health check script. Let’s discuss all the components in more detail.

Health check script

We need to create a database user with enough permissions to run the queries in the health check script. Let’s create the user on the current primary:
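
For example (the account name, host mask, and password are placeholders; adjust the grants to whatever your final script actually queries):

CREATE USER 'gr_healthcheck'@'localhost' IDENTIFIED BY 'S3cretPassword!';
GRANT SELECT ON performance_schema.* TO 'gr_healthcheck'@'localhost';
GRANT SELECT ON sys.* TO 'gr_healthcheck'@'localhost';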

The queries in our sample script require a few special objects to be created in the sys schema:
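
The exact objects depend on the script you end up with. As a minimal, illustrative sketch, a view in the sys schema can expose the state and role of the local member (the view name gr_member_status is an assumption, not a standard sys object):

USE sys;

CREATE OR REPLACE VIEW gr_member_status AS
SELECT MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members
 WHERE MEMBER_ID = @@server_uuid;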

Here is an example of a simple checker script that is good enough for testing purposes. For production use, you might want to develop something more robust.
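
A minimal sketch, assuming the script is saved as /usr/local/bin/gr_check, the gr_healthcheck user and sys.gr_member_status view from the previous steps, and a writer/reader role passed as the first argument:

#!/bin/bash
# /usr/local/bin/gr_check <writer|reader>
# Prints an HTTP 200 response if this node can serve the requested role, HTTP 503 otherwise.
# Credentials and object names are the placeholders used earlier in this post.

ROLE="${1:-writer}"
MYSQL="mysql -u gr_healthcheck -pS3cretPassword! -N -s -e"

# MEMBER_STATE, MEMBER_ROLE and super_read_only of the local member
read -r STATE MEMBER_ROLE SRO <<< "$($MYSQL "SELECT MEMBER_STATE, MEMBER_ROLE, @@global.super_read_only FROM sys.gr_member_status" 2>/dev/null)"

AVAILABLE=0
if [ "$STATE" = "ONLINE" ]; then
    if [ "$ROLE" = "reader" ]; then
        AVAILABLE=1
    elif [ "$MEMBER_ROLE" = "PRIMARY" ] && [ "$SRO" = "0" ]; then
        # writable primary of the primary cluster
        AVAILABLE=1
    fi
fi

if [ "$AVAILABLE" -eq 1 ]; then
    echo -en "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nnode is available for $ROLE traffic\r\n"
else
    echo -en "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nnode is not available for $ROLE traffic\r\n"
fi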

Don’t forget to give the script execute permissions:
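
chmod +x /usr/local/bin/gr_check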

Xinetd service

We need to deploy a custom xinetd service to expose the state of MySQL on each node. We deploy one service on port 6446 that checks whether a node is available for writes, and another on port 6447 that checks whether it is available for reads.

First, install the xinetd package, e.g.,
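
# RHEL / CentOS / Oracle Linux
sudo yum install -y xinetd

# Debian / Ubuntu
sudo apt-get install -y xinetd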

Now we prepare the definitions of the xinetd services:
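
A sketch for the writer check on port 6446 (the service name, file location, and script path are the placeholders used above):

# /etc/xinetd.d/mysqlchk_write
service mysqlchk_write
{
    disable         = no
    flags           = REUSE
    type            = UNLISTED
    socket_type     = stream
    port            = 6446
    wait            = no
    user            = nobody
    server          = /usr/local/bin/gr_check
    server_args     = writer
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}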

And:
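
# /etc/xinetd.d/mysqlchk_read -- same as above, but checking read availability on port 6447
service mysqlchk_read
{
    disable         = no
    flags           = REUSE
    type            = UNLISTED
    socket_type     = stream
    port            = 6447
    wait            = no
    user            = nobody
    server          = /usr/local/bin/gr_check
    server_args     = reader
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}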

Start the service:
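
sudo systemctl enable --now xinetd
# or, if it was already running, pick up the new service definitions with
sudo systemctl restart xinetd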

Testing the service

We can verify our service is working by using telnet (or curl) from a remote host. For example, to check if a node is available as a writer, we query the service on port 6446:
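
For example (the response body shown is simply what the sample script above prints):

$ curl -i http://mysql1-t1:6446
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close

node is available for writer traffic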

We can also verify if a node can be a reader using port 6447:
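
$ curl -i http://mysql2-t1:6447
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close

node is available for reader traffic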

HAProxy configuration

HAProxy needs to be configured with two dedicated endpoints, one for writes and one for reads. The HTTP checks we define will query our custom xinetd services to determine a node’s read/write availability.

Here’s an example of the HAProxy configuration:
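
A sketch of the relevant fragment, assuming HAProxy listens on port 3307 for writes and 3308 for reads (both arbitrary choices for this example), with the DR nodes kept as backup servers:

# /etc/haproxy/haproxy.cfg (fragment)
frontend mysql_write
    bind *:3307
    mode tcp
    default_backend be_mysql_write

backend be_mysql_write
    mode tcp
    option httpchk GET /
    # only the writable primary passes the check on port 6446
    server mysql1-t1 mysql1-t1:3306 check port 6446 inter 2000 rise 2 fall 2
    server mysql2-t1 mysql2-t1:3306 check port 6446 inter 2000 rise 2 fall 2
    server mysql3-t1 mysql3-t1:3306 check port 6446 inter 2000 rise 2 fall 2
    server mysql1-t2 mysql1-t2:3306 check port 6446 inter 2000 rise 2 fall 2 backup
    server mysql2-t2 mysql2-t2:3306 check port 6446 inter 2000 rise 2 fall 2 backup
    server mysql3-t2 mysql3-t2:3306 check port 6446 inter 2000 rise 2 fall 2 backup

frontend mysql_read
    bind *:3308
    mode tcp
    default_backend be_mysql_read

backend be_mysql_read
    mode tcp
    balance leastconn
    option httpchk GET /
    # any ONLINE member passes the check on port 6447
    server mysql1-t1 mysql1-t1:3306 check port 6447 inter 2000 rise 2 fall 2
    server mysql2-t1 mysql2-t1:3306 check port 6447 inter 2000 rise 2 fall 2
    server mysql3-t1 mysql3-t1:3306 check port 6447 inter 2000 rise 2 fall 2
    server mysql1-t2 mysql1-t2:3306 check port 6447 inter 2000 rise 2 fall 2 backup
    server mysql2-t2 mysql2-t2:3306 check port 6447 inter 2000 rise 2 fall 2 backup
    server mysql3-t2 mysql3-t2:3306 check port 6447 inter 2000 rise 2 fall 2 backup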

Connecting our application

Here’s an example of how the application would connect to the database through HAProxy to write:
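
For example, using the write port defined above (the HAProxy host name and credentials are placeholders):

mysql -h haproxy.example.com -P 3307 -u app_user -p -e "SELECT @@hostname, @@super_read_only"

The query should return the hostname of the current primary, with super_read_only reported as 0.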

For read-only connections, it would use the alternative port as configured above:
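
mysql -h haproxy.example.com -P 3308 -u app_user -p -e "SELECT @@hostname, @@super_read_only"

Repeated connections should be spread across the available reader nodes, with the secondaries reporting super_read_only as 1.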

Closing thoughts

Using this architecture, both local and regional failover can be triggered through MySQL Shell. HAProxy will automatically adjust the writer/reader nodes without human intervention.

We can also take individual nodes out of the pool by simply stopping the MySQL service or pausing group replication.
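
For example, on the node to be removed from rotation:

mysql> STOP GROUP_REPLICATION;   -- the health checks start returning 503 and HAProxy drains the node
mysql> START GROUP_REPLICATION;  -- the node rejoins the group and is put back into rotation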

Hopefully, in the future, MySQL Router will offer performance similar to other routing solutions so that we can take full advantage of its integration with the rest of the InnoDB Cluster components.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!
