Fallacy #7: Transport cost is zero

Of course, there are upfront and ongoing costs associated with any computer network. The servers themselves, cabling, network switches, racks, load balancers, firewalls, power equipment, air handling, security, rent/mortgage, not to mention experienced staff to keep it all running smoothly, all come with a cost.

Companies today have, for the most part, accepted this as just another cost of doing business in the modern world.

With cloud-based server resources, this equation changes only slightly. Instead of paying for a lot of these things upfront, we lease them from the cloud providers. That may change how a company represents these costs on its balance sheet, but overall, it’s the same concept.

But we also have to pay for bandwidth. To connect our data center to the rest of the world, we must exchange currency for the transport of our bits and bytes. In the cloud, we pay for this too, whether directly or as part of the cost of whichever service we’re using.

The hidden cost

However, there’s another component of transport cost that doesn’t get talked about enough: the cost paid in serialization and deserialization. To build distributed systems, we must take objects in memory, serialize them for transmission over the network, and then deserialize them at the other end.

It may seem like no big deal, but it’s important to remember that serialization doesn’t usually happen in code that we measure. Instead, we profile things like web controllers, after the deserialization step has already been done.
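To make that hidden cost visible, here’s a minimal Python sketch (the payload shape and iteration counts are invented for illustration) that times the JSON round-trip a request profiler typically never attributes to our own code:

```python
import json
import timeit

# Invented payload shape, purely for illustration.
payload = {
    "orderId": 12345,
    "lines": [{"sku": f"SKU-{i}", "qty": 2} for i in range(100)],
}

wire = json.dumps(payload)

# Time 10,000 serialize and deserialize round-trips.
serialize = timeit.timeit(lambda: json.dumps(payload), number=10_000)
deserialize = timeit.timeit(lambda: json.loads(wire), number=10_000)

print(f"serialize:   {serialize:.3f} s / 10,000 calls")
print(f"deserialize: {deserialize:.3f} s / 10,000 calls")
```

The numbers will vary by machine and payload, but the point stands: this CPU time is real, and it accrues outside the code we usually profile.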

If you’re in a cloud environment, you are paying CPU hourly costs, and serialization/deserialization costs directly affect that. Cloud providers have been very successful at marketing their services by saying it’s only pennies per hour, which is true. However, there are 730 hours in an average month, and all these pennies tend to add up.

When you introduce elastic scale-out to the equation, it gets worse. Sure, you can scale out to 1000 nodes if need be, but then pennies per hour turn into tens of dollars per hour.
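To put rough numbers on it (the hourly rate here is an assumption for illustration, not any particular provider’s pricing):

```python
# Illustrative numbers only: assume a node costs $0.05 per hour.
hourly_rate = 0.05        # dollars per node-hour (assumed rate)
hours_per_month = 730     # average hours in a month
nodes = 1000              # elastic scale-out target

print(f"one node:  ${hourly_rate * hours_per_month:,.2f} per month")          # $36.50
print(f"the fleet: ${hourly_rate * nodes:,.2f} per hour")                     # $50.00
print(f"the fleet: ${hourly_rate * hours_per_month * nodes:,.2f} per month")  # $36,500.00
```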

Then the question becomes “how much serialization are we actually doing?” The more we distribute, the larger the proportion of time we spend serializing and deserializing data.

Solutions

One great thing about the cloud, from a developer’s point of view, is that it gives us monthly feedback on how stupid we were when we designed our software. If we design an inefficient system, we pay for it directly in hard currency. In an on-premises system, that inefficiency would tend to get ignored, but the cloud makes it impossible to ignore.

So how do we optimize for these costs?

Beware premature optimization

We can’t neglect the cost of development effort either. Trade-offs must be made between infrastructure costs and development costs, both upfront and ongoing. Compared to the cost of a developer’s time, infrastructure costs may be too small to justify chasing every inefficiency.

However, this doesn’t mean we should design our systems poorly on purpose.

The effect of serialization on performance further strengthens the argument made by the 2nd fallacy (latency is zero) and the 3rd fallacy (bandwidth is infinite) to avoid incessant chit-chat over the network. We should keep our payloads as small as possible, taking care to determine exactly how much data we actually need to ship across the network.
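As a sketch of what that means in practice (the order shape and field names here are hypothetical), shipping only the fields a consumer actually needs can shrink a payload dramatically:

```python
import json

# Hypothetical order record; the downstream consumer only needs two fields.
full_order = {
    "orderId": 12345,
    "status": "shipped",
    "customer": {"name": "Ada Lovelace", "address": "10 Example Street"},
    "lines": [{"sku": f"SKU-{i}", "qty": 2, "notes": "gift wrap"} for i in range(50)],
}
slim_order = {"orderId": full_order["orderId"], "status": full_order["status"]}

print(len(json.dumps(full_order)))  # everything, whether needed or not
print(len(json.dumps(slim_order)))  # only what the consumer asked for
```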

Right format for the job

We should be mindful of serialization formats as well. In light of the 7th fallacy, bloated formats like SOAP and the WS-* series of specifications seem quite onerous. When possible, consider more compact serialization formats instead.

At a bare minimum, JSON offers output that is well suited to describing data object models and is much more compact than anything based on XML. It’s the nearly universally accepted language of RESTful web services. But even JSON is a bit fat, requiring lots of quotation marks, curly braces, and spelled-out constants like true and false, all for the sake of being human-readable and easy to parse. For performance-critical applications, consider compact binary formats like MessagePack or Google’s Protocol Buffers.
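As a quick comparison (this assumes the third-party msgpack package, installable with pip install msgpack; the record is invented, and savings vary with the shape of the data):

```python
import json

import msgpack  # third-party package: pip install msgpack

# Invented record; actual savings depend on your data's shape.
record = {"name": "sample", "active": True, "ratio": 0.25,
          "scores": list(range(50))}

as_json = json.dumps(record).encode("utf-8")
as_msgpack = msgpack.packb(record)

print(f"JSON:        {len(as_json)} bytes")
print(f"MessagePack: {len(as_msgpack)} bytes")

# Round-trip to confirm the compact format loses nothing.
assert msgpack.unpackb(as_msgpack) == record
```

Protocol Buffers goes further by moving the schema out of the payload entirely, at the cost of a schema definition and compilation step.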

It’s important to remember that there is no one perfect serialization format for every situation. At times, it is preferable to have a binary representation of data to optimize for raw speed. Other times, the situation may call for a human-readable format, as making it easy for a developer to view and comprehend the data may be of more business value than a more opaque format that happens to be faster.

Summary

There are some fixed and ongoing costs in software development that we simply can’t do anything about, and that we usually take for granted. However, the cost of serialization and deserialization is something that is under our control. When running in the cloud, we directly pay for it, whether we realize it or not.

By choosing serialization formats that are appropriate to our use case and are as efficient as possible, as well as by being careful about how much data we move around the network at a time, we can ensure that these costs are as low as possible.

And, while one instance of serialization or deserialization is seldom noticeable to the end user in isolation, careful consideration of the 7th fallacy will likely result in systems that, in aggregate, are indeed faster and more responsive for our end users.

About the author

David Boike

David Boike is a developer at Particular Software who doesn't enjoy the high transport costs for his craft beer, but pays them anyway.
