With Valkey (a fork of Redis) around the corner, I thought I would write a short blog post about some of its configuration, mainly discussing how to dynamically change certain settings and persist them in the configuration file.

Disk persistence

Let me start with a very important setting, “save,” which controls when Valkey performs a dump of the dataset/keys to disk. The resulting snapshot is a perfect point-in-time copy for data restoration or recovery purposes.

In the below example, we have the default settings for saving a snapshot.
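The current thresholds can be inspected with valkey-cli (a sketch assuming a local instance on the default port; the output shown is the usual default, and your instance may differ):

```shell
# Inspect the current snapshot thresholds
valkey-cli CONFIG GET save
# 1) "save"
# 2) "3600 1 300 100 60 10000"
```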

These numbers mean the following:

  • 3600 1 – Save a snapshot of the DB every 3600 seconds if at least 1 write operation was performed.
  • 300 100 – Save a snapshot of the DB every 300 seconds if at least 100 write operations were performed.
  • 60 10000 – Save a snapshot of the DB every 60 seconds if at least 10000 write operations were performed.

We can change these settings dynamically with the [config set] command. Here, we have changed the snapshot option to “60 1,” which means a save will happen every 60 seconds if at least one write operation is performed.
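A minimal sketch of the runtime change (assuming valkey-cli against a local instance; no restart is required):

```shell
# Change the snapshot rule at runtime: save every 60 seconds
# if at least one write happened
valkey-cli CONFIG SET save "60 1"
# Confirm the new value
valkey-cli CONFIG GET save
```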

So the changes work perfectly.

Further, we can persist the settings permanently in the [valkey.conf] file with the [config rewrite] command.
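A quick sketch of persisting and verifying the change (the config file path is an assumption; adjust it to wherever your valkey.conf lives):

```shell
# Write all runtime configuration changes back to the config file
valkey-cli CONFIG REWRITE
# Verify the persisted directive
grep '^save' /etc/valkey/valkey.conf
```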

So, the changes are reflected correctly.

In a production environment, we should avoid running the “SAVE” command directly, as it blocks the server and stalls the workload. BGSAVE is a better option for an ad-hoc run: it works in the background and doesn’t affect the connected clients.
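For example, an ad-hoc background snapshot can be triggered and checked like this (a sketch assuming a local instance):

```shell
# Fork a background process to write the snapshot, without blocking clients
valkey-cli BGSAVE
# Report the unix timestamp of the last successful save
valkey-cli LASTSAVE
```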

Having a snapshot (RDB) file alone is not sufficient, as we can still lose the keys/data written since the last snapshot in case of a crash or corruption. Here, AOF (Append Only File) comes in very handy, as it ensures each write is persisted in a log file. If the server restarts, these logs can be replayed to restore the original state of the dataset.

In our case, it’s already enabled here.
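To check the AOF state on your own instance (a sketch; the CONFIG SET line is only needed if it isn’t enabled yet):

```shell
# Check whether AOF persistence is enabled
valkey-cli CONFIG GET appendonly
# Enable it at runtime if needed, then persist the change
valkey-cli CONFIG SET appendonly yes
valkey-cli CONFIG REWRITE
```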

 

Memory usage

As Valkey/Redis is mainly used for caching purposes, the amount of data that can be stored depends on the amount of allocated memory. We can control how much memory Valkey will use via the [maxmemory] parameter.

Here, we allocate 256 MB to Valkey to use for its operation.
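A sketch of the runtime change (assuming a local instance; note that CONFIG GET reports the value back in bytes):

```shell
# Cap Valkey's memory usage at 256 MB at runtime
valkey-cli CONFIG SET maxmemory 256mb
# Reported back in bytes (268435456 = 256 * 1024 * 1024)
valkey-cli CONFIG GET maxmemory
```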

Similarly, we can persist the changes with [config rewrite].
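For example (the config file path is an assumption; adjust it for your setup):

```shell
valkey-cli CONFIG REWRITE
# Confirm the directive landed in the config file
grep '^maxmemory' /etc/valkey/valkey.conf
```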

 

Now, what happens if the above memory allocation reaches its limit?

In that case, we have eviction policies, defined by the [maxmemory-policy] setting:

  • allkeys-lru: Keeps the most recently used keys; removes the least recently used (LRU) keys.
  • allkeys-lfu: Keeps frequently used keys; removes the least frequently used (LFU) keys.
  • volatile-lru: Removes the least recently used keys that have an expiry set.
  • volatile-lfu: Removes the least frequently used keys that have an expiry set.
  • allkeys-random: Randomly removes keys to make space for new data.
  • volatile-random: Randomly removes keys that have an expiry set.
  • volatile-ttl: Removes keys that have an expiry set, starting with the shortest remaining time-to-live (TTL).
  • noeviction: New values aren’t saved when the memory limit is reached. When a database uses replication, this applies to the primary database.

Reference:- https://redis.io/docs/latest/develop/reference/eviction/

Here, we set the noeviction policy, which rejects new key writes once the memory limit is reached.
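A sketch of applying the policy (assuming a local instance; the error text shown is the typical OOM response, and the exact wording may vary by version):

```shell
# Apply the noeviction policy: writes fail once maxmemory is hit
valkey-cli CONFIG SET maxmemory-policy noeviction
# Once the limit is reached, further writes return an error similar to:
# (error) OOM command not allowed when used memory > 'maxmemory'.
```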

Server-client config

In Valkey/Redis, we can control the maximum number of clients connected to the database with the [maxclients] setting.

This can also be changed dynamically by [config set] and persisted by [config rewrite].
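For example (the value 20000 is purely illustrative, not a recommendation):

```shell
# Raise the connection limit at runtime, then persist it
valkey-cli CONFIG SET maxclients 20000
valkey-cli CONFIG REWRITE
```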

Sometimes, clients can get disconnected while executing some heavy workload or fetching large keys. This can happen when the hard/soft limits defined under the [client-output-buffer-limit] setting are reached.

This can affect multiple client channels, like the pub/sub model or replication (replica links), etc.

If we want to change the value with respect to a specific channel we can perform the same as below.
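A sketch for the pubsub class (the limits here are illustrative values in bytes: a 64 MB hard limit and a 16 MB soft limit sustained over 90 seconds; other classes are left untouched):

```shell
# Format: <class> <hard-limit> <soft-limit> <soft-seconds>
valkey-cli CONFIG SET client-output-buffer-limit "pubsub 67108864 16777216 90"
# Confirm the new limits
valkey-cli CONFIG GET client-output-buffer-limit
```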

Similarly, it can be persisted by [config rewrite].

Replication

On some occasions, the replica (slave) nodes can be disconnected or lost for a long duration, and when they come back online, a full resync may be required. This can be controlled by the [repl-backlog-size] setting.

Basically, the bigger the replication backlog [repl-backlog-size] is, the longer a replica can be disconnected from the source/master and still rejoin via a partial resync instead of a full one.

This can also be set dynamically and persisted to the disk.
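For example (64 MB here is an illustrative value; size the backlog to your expected disconnection window and write rate):

```shell
# Grow the replication backlog so replicas can survive longer
# disconnections and still perform a partial resync
valkey-cli CONFIG SET repl-backlog-size 67108864
valkey-cli CONFIG REWRITE
```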

Apart from the configuration settings, there are some risky commands worth mentioning here. We should always be cautious while running commands like [FLUSHALL] and [FLUSHDB], as they can wipe out all the keys, or the whole dataset, from the database environment.
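One traditional safeguard is to rename such commands to an empty string in the config file so they can no longer be invoked. A sketch, assuming the usual config path and that the server is restarted afterwards (rename-command cannot be set via CONFIG SET):

```shell
# Append to valkey.conf to neutralize the destructive commands
cat >> /etc/valkey/valkey.conf <<'EOF'
rename-command FLUSHALL ""
rename-command FLUSHDB ""
EOF
```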

Summary

In this blog post, I mainly explained how to adjust some of the important settings in Valkey and highlighted some key configurations that can impact the workload. Since Valkey is in its early phase, stay tuned for more coverage of this technology in the future!
