Performing operational tasks is always challenging when dealing with K8s.

When running on-premises or on a DBaaS such as RDS or Cloud SQL, it is relatively straightforward to apply a change. You can perform a DIRECT ALTER, use a tool such as pt-osc (pt-online-schema-change), or even, in certain cases where async replication is in use, perform the changes on the replicas and then failover.

In this blog post, I’ll provide some thoughts on how schema changes can be performed when running MySQL on Kubernetes.

I won’t focus on DIRECT ALTERs, as applying them is pretty straightforward: you can just connect to the MySQL service POD and perform the ALTER.
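For reference, a DIRECT ALTER can be run with a single kubectl exec (the pod, Secret, table, and column names below are illustrative and depend on your deployment):

```shell
# Illustrative only: adjust pod, Secret, and statement for your cluster.
# The root password is pulled from the operator-managed Secret and decoded.
kubectl exec -it cluster1-pxc-0 -c pxc -- \
  mysql -uroot -p"$(kubectl get secret cluster1-secrets \
    -o jsonpath='{.data.root}' | base64 -d)" \
  -e "ALTER TABLE test.app_users ADD COLUMN notes VARCHAR(255)"
```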

But how can we apply changes in more complex scenarios where we may want to benefit from pt-osc, gain better control over the operation, or take advantage of the K8s features?

One convenient way that I’ve found working well is the use of an external POD running the percona-toolkit. If you need other tools or more flexibility, you can, of course, use your own image.

Assuming that there is already a Secrets object holding all system user passwords, the concept is really simple. We are going to create a Job.

The use of Jobs provides a convenient way for both Devs and DBAs to apply changes, taking advantage of the following benefits:

  • Full track of the event lifecycle, including execution logs
  • Auditing
  • ConfigMap options, if any, can be reused
  • You don’t have to explicitly pass credentials or endpoints to the execution, as these are all available through the Secrets objects and environment variables
  • Execution can be scheduled in a predefined maintenance window (you can easily convert a Job to a CronJob)
  • A task can be easily and consistently executed multiple times across multiple environments

Let’s proceed to the proof of concept now.

I deployed a three-node Percona XtraDB Cluster (PXC) using the Percona Operator for MySQL. HAProxy was acting as a load balancer in front of the cluster.

These are my Secrets:
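The original listing isn’t reproduced here, but with the Percona Operator the system user passwords typically live in a single operator-managed Secrets object. A minimal sketch, with illustrative names and passwords:

```yaml
# Illustrative sketch: the operator typically maintains a Secret like
# cluster1-secrets with one key per system user (xtrabackup, proxyadmin,
# etc. would have their own keys as well). Values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-secrets
type: Opaque
data:
  root: cm9vdF9wYXNzd29yZA==           # base64 of "root_password"
  monitor: bW9uaXRvcl9wYXNzd29yZA==    # base64 of "monitor_password"
```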

The following simple table was created in the PXC cluster:
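The table definition isn’t shown above; a minimal stand-in (database and table names are hypothetical) could be:

```sql
-- Hypothetical table; the original post's definition is not reproduced.
CREATE TABLE test.app_users (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  username VARCHAR(64) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
```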

What I wanted to do was apply the following simple schema change using pt-osc:
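The original statement isn’t reproduced; as a hypothetical example, suppose we want to add a column:

```sql
-- Hypothetical change; pt-osc will apply this online via a shadow table.
ALTER TABLE test.app_users ADD COLUMN email VARCHAR(128);
```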

To do so, I created a K8s Job YAML file as below:
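The original YAML isn’t reproduced, but a sketch of such a Job could look like the following. It assumes the operator’s default cluster1 naming, the percona/percona-toolkit image, and the hypothetical email-column ALTER; adjust names, image tag, and the statement for your environment (the Job name matches the POD name seen later in this post):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pt-osc-atsaloux-request-123456
spec:
  backoffLimit: 0            # do not auto-retry a failed schema change
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pt-osc
          image: percona/percona-toolkit:latest
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster1-secrets   # operator-managed system users Secret
                  key: root
          command:
            - pt-online-schema-change
            - --host=cluster1-haproxy      # go through the HAProxy service
            - --user=root
            - --password=$(MYSQL_ROOT_PASSWORD)   # K8s env var expansion
            - --alter=ADD COLUMN email VARCHAR(128)
            - D=test,t=app_users
            - --execute
```

Note that `$(MYSQL_ROOT_PASSWORD)` is expanded by Kubernetes itself from the container’s environment, so the password never has to be written into the manifest.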

I’m not going to explain the YAML definition above as I believe this is pretty straightforward for you to understand and adjust.

I then applied the YAML file, and the Job was created:
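Assuming the manifest is saved as pt-osc-job.yaml (the filename is an assumption), applying it and watching the Job could look like:

```shell
kubectl apply -f pt-osc-job.yaml
# Watch the Job and its POD until completion
kubectl get jobs
kubectl get pods -l job-name=pt-osc-atsaloux-request-123456
```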

As you can see above, after a few seconds the pt-osc-atsaloux-request-123456-h6t9p POD transitioned to a Completed status. This indicates that the execution of the Job was successful.

I’m now able to review the logs using the following:
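With kubectl, the Job’s logs can be fetched by Job name (matching the POD name above):

```shell
kubectl logs job/pt-osc-atsaloux-request-123456
```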

And finally confirmed that the schema changes had been successfully applied:
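One way to confirm is to check the table definition directly on a PXC POD (pod, Secret, and table names are illustrative):

```shell
kubectl exec -it cluster1-pxc-0 -c pxc -- \
  mysql -uroot -p"$(kubectl get secret cluster1-secrets \
    -o jsonpath='{.data.root}' | base64 -d)" \
  -e "SHOW CREATE TABLE test.app_users\G"
```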

For the purposes of this blog post, I repeated the operation by creating a variation of the Job above as below. The difference here is that the ALTER statement was actually incorrect.
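Only the --alter argument needs to change; a sketch of the modified fragment, deliberately referencing a column that does not exist (everything else in the Job stays the same; names are illustrative):

```yaml
          command:
            - pt-online-schema-change
            - --host=cluster1-haproxy
            - --user=root
            - --password=$(MYSQL_ROOT_PASSWORD)
            # The column email_not_exists does not exist, so pt-osc will fail
            - --alter=ADD INDEX idx_email (email_not_exists)
            - D=test,t=app_users
            - --execute
```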

I then applied it again to create the new Job and noticed that it errored out this time:

But why? You can see the reason in the logs: Key column 'email_not_exists' doesn't exist in table.

Now, I can delete the Job if needed, make adjustments, and re-run. The Jobs documentation has quite a few details on failure handling, retries, deadlines, etc., that you may find useful.
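Cleaning up the failed Job is a single command (using this example’s Job name):

```shell
kubectl delete job pt-osc-atsaloux-request-123456
```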

Again, this is a high-level idea, and operations other than schema changes can be performed in a similar way. You can, of course, further improve on it or follow other approaches. In K8s, there are several ways to achieve the same result, so feel free to comment!

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!


Try Percona Distribution for MySQL today!
