Recently, I was working on a very unfortunate case that revolved around diverging clusters, data loss, missing important log errors, and forcing commands on Percona XtraDB Cluster (PXC). Even though PXC tries its best to explain what happens in the error log, I can vouch that those messages are easily missed or overlooked when you do not know what to expect.

This blog post is a cautionary tale and an invitation to try it yourself and break stuff (not in production, right?).

TLDR:
Do you know right away what happened when seeing this log?
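
Here is a reconstruction of the kind of error I mean (the file/line prefix and the exact numbers will vary):

```
[ERROR] WSREP: gcs/src/gcs_group.cpp:group_post_state_exchange(): Reversing history: 127 -> 5,
this member has applied 122 more events than the primary component.
Data loss is possible. Must abort.
```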

Demonstration

Using the great https://github.com/datacharmer/dbdeployer:
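
Deploying a three-node PXC sandbox goes roughly like this (the version is just an example, and this assumes a PXC tarball was already unpacked with dbdeployer unpack):

```bash
# Assumes a Percona XtraDB Cluster tarball was already unpacked with "dbdeployer unpack";
# the version string must match the binaries directory you unpacked
dbdeployer deploy replication --topology=pxc 8.0.31

# dbdeployer creates one directory per node (node1, node2, node3),
# each with its own start/stop/use wrapper scripts
cd ~/sandboxes/pxc_msb_8_0_31   # the sandbox directory name may differ
```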

Let’s write some data
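
A minimal sketch, with a made-up schema and table just to generate replicated writes:

```bash
# Create a throwaway schema/table and write a few rows through node1
./node1/use -e "CREATE SCHEMA app"
./node1/use -e "CREATE TABLE app.t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(32)) ENGINE=InnoDB"
./node1/use -e "INSERT INTO app.t (v) VALUES ('hello'), ('world')"

# The rows are replicated, so they are visible from the other nodes
./node2/use -e "SELECT COUNT(*) FROM app.t"
```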

Then let’s suppose someone wants to restart node1. For some reason, they read somewhere in your internal documentation that they should bootstrap it in that situation. With dbdeployer, this will translate to roughly the following.
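
This is only a sketch; it assumes that extra options given to a node’s start script are passed straight through to mysqld:

```bash
# Stop node1...
./node1/stop

# ...then try to start it as if it were bootstrapping a brand-new cluster
./node1/start --wsrep-new-cluster
```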

It fails, as it should.

In reality, those bootstrap mistakes happen in homemade start scripts, Puppet or Ansible modules, or even internal procedures applied in the wrong situation.

Why did it fail? First error to notice:
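
In node1’s error log, you should find Galera’s well-known safe_to_bootstrap refusal, something close to:

```
[ERROR] WSREP: It may not be safe to bootstrap the cluster from this node.
It was not the last one to leave the cluster and may not contain all the updates.
To force cluster bootstrap with this node, edit the grastate.dat file manually
and set safe_to_bootstrap to 1 .
```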

Reminder: Bootstrap should only be used when every node has been double-checked to be down; it’s a manual operation. It fails here because it was not forced and because this node was not the last one to leave the cluster: node2 and node3 were still running when it was stopped, so its grastate.dat says safe_to_bootstrap: 0.

Good reflex: Connect to MySQL on the other nodes and check the ‘wsrep_cluster_size’ and ‘wsrep_cluster_status’ statuses before doing anything.
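
For example, using the sandbox’s per-node client scripts:

```bash
# How do the other nodes see the cluster right now?
./node2/use -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status')"
./node3/use -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status')"
```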

Do not: Blindly apply what this log is telling you to do.

But we are here to “fix” around and find out, so let’s bootstrap.
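
A sketch of the forced bootstrap, assuming dbdeployer’s default layout (the datadir lives under each node directory). This is exactly the mistake being demonstrated, so do not reproduce it anywhere that matters:

```bash
# DANGER: overriding Galera's safety check by hand...
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' node1/data/grastate.dat

# ...then bootstrapping node1 into its own brand-new, single-node cluster
./node1/start --wsrep-new-cluster
```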

At this point, notice that from node1, you have:
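
Reconstructed output, trimmed to the relevant values:

```
mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status');
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_size   | 1       |
| wsrep_cluster_status | Primary |
+----------------------+---------+
```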

But from node2 and node3 you will have:
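
Again reconstructed, trimmed the same way:

```
mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status');
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_size   | 2       |
| wsrep_cluster_status | Primary |
+----------------------+---------+
```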

Looks fishy. But does your monitoring really alert you to this?

Let’s write some more data, obviously on node1, because why not? It looks healthy.
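
A sketch, reusing the made-up table from earlier; the exact value does not matter, but in this walkthrough node1’s seqno ends up at 127:

```bash
# More writes, on node1 only
./node1/use -e "INSERT INTO app.t (v) VALUES ('more'), ('data'), ('written'), ('here')"

# node1's last committed seqno; in this walkthrough it ends up at 127
./node1/use -e "SHOW GLOBAL STATUS LIKE 'wsrep_last_committed'"
```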

Keep this “127” in mind; it will be useful later on.

Nightmare ensues

Fast forward a few days. You are still writing to your node when a new reason to restart node1 comes up; maybe you want to apply a parameter change.
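
With the sandbox, a perfectly normal, non-bootstrap restart:

```bash
# A plain restart of node1, e.g. to apply a configuration change
./node1/stop
./node1/start
```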

It fails?

Reviewing the logs, you would find the same “Reversing history … Must abort” error shown in the TL;DR above.

Voilà, we find our “127” again.

Good reflex: It depends. This one would need a post of its own, but it is a serious problem.

Do not: Force SST on this node. It will work, and all the data written to node1 since the faulty bootstrap will be lost.

What does it mean?

When forcing a bootstrap, a node will always start. It will never try to connect to the other nodes, even if they are healthy. The other nodes won’t try to connect to the bootstrapped node either; from their point of view, it simply never joined, so it is not part of the cluster.

When the previously bootstrapped node1 is restarted in non-bootstrap mode, it is the first time in a while that all the nodes see each other.

Each time a transaction is committed, it is replicated along with a sequence number (seqno). The seqno is an ever-growing number. Nodes use it to determine whether an incremental state transfer is possible, or whether a node’s state is coherent with the others.

Now that node1 is no longer in bootstrap mode, it connects to the other members and shares its state (last primary members, seqno). The other nodes correctly pick up that this seqno looks suspicious: it is higher than their own, meaning the joining node could have applied more transactions than they did. It could also mean it comes from some other cluster entirely.

Because the nodes are in doubt, nothing happens. Node1 is denied joining and does nothing: it won’t try to resynchronize automatically, and it won’t touch its data. Node2 and node3 are not impacted; their data is kept as is too.

How to proceed from there depends on the situation; there are no general guidelines. Ideally, a source of truth should be found. If both clusters applied writes, that is the toughest situation to be in: it is a split brain.

Note: Seqnos are just numbers. Equal seqnos do not actually guarantee that the underlying applied transactions are identical, but they are still useful as a simple sanity check. If we were to mess around even more and apply 127 transactions on node2, or even modify the seqno manually in grastate.dat, we could get some “interesting” results. Try it out (not in production, mind you)!
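
For reference, grastate.dat is a small text file in the node’s datadir and looks like this (the UUID and seqno shown here are illustrative):

```
# GALERA saved state
version: 2.1
uuid:    f7a1b2c3-0c3d-11ee-b11a-000000000000
seqno:   127
safe_to_bootstrap: 0
```

While mysqld is running, the seqno stored there is usually -1; it is only written out on a clean shutdown.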

Note: If you are not familiar with bootstrapping and how to properly recover, check out the documentation.

Conclusion

Bootstrap is a last-resort procedure; don’t force it lightly. Likewise, do not force an SST right away when a node refuses to join. Always check the error log first.

Fortunately, PXC does not blindly let any node join without some sanity checks.

