Adapting #Accelerate to Development

Accelerate: Building and Scaling High Performing Technology Organizations is one of the best books to hit the shelves in a long time. Nicole Forsgren and her co-authors, Jez Humble and Gene Kim, have done the industry a huge service by providing a data-focused way of analyzing and improving performance that’s based on real research, not voodoo. In the book, they identify four key metrics that are essential to track.

The only problem with Accelerate (not really a problem) is that the book is DevOps focused, so let’s take a look at how you’d apply the four critical metrics to the Agile-development part of the equation.

Lead Time

Lead time is the time it takes for an idea to get into your customers’ hands. It encompasses the entire value stream: research, conception, product refinement, development, deployment. Everything. Most teams look only at cycle time (the time from when you pull a story off the backlog until the team marks it as done), but that’s not nearly good enough.

First, every stage of the value stream has a cost associated with it that we cannot recover until we deploy. One of those costs, the cost of delay, can actually exceed the cost of development. The longer it takes, the higher the costs. Cycle time does not measure those full costs. Next, the speed at which a single team works doesn’t really matter. If you’re all marching in line, one person walking faster doesn’t help the last person in line arrive any sooner. The speedster just trips on the heels of the person in front of them. In the worst case, the faster team creates “inventory”: work that’s finished but stuck, waiting for the next step to happen. Inventory is a liability from a throughput-accounting (i.e., Lean-accounting) perspective. It costs money to create, but it doesn’t generate revenue. In the Lean world, it’s essential to keep inventory to a minimum.

So, the only real reason to measure cycle time is to compare it to the cycle times of other stages (not other teams, but other stages in the value stream) so that you can adjust the flow through the entire system. If a stage’s cycle time is too high, it may be a bottleneck. If it’s too low relative to the stages downstream, a lot of that work just piles up as inventory. Use that information to tune your process to give you maximum flow (minimum lead time).
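As a concrete (and entirely invented) example, here’s a minimal Python sketch of the calculation this implies: average cycle time per stage, computed from the dates each work item entered and left the stage. The stage names, items, and dates are made up; the point is only the shape of the arithmetic.

```python
# A minimal sketch (not from the book): average cycle time per stage of the
# value stream. Stage names, work items, and dates are all invented.
from datetime import datetime
from statistics import mean

# For each stage, (entered, left) dates for a handful of hypothetical work items.
stage_spans = {
    "research":    [("2024-01-02", "2024-01-05"), ("2024-01-08", "2024-01-10")],
    "development": [("2024-01-05", "2024-01-09"), ("2024-01-10", "2024-01-12")],
    "deployment":  [("2024-01-09", "2024-01-16"), ("2024-01-12", "2024-01-19")],
}

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

for stage, spans in stage_spans.items():
    avg = mean(days_between(s, e) for s, e in spans)
    print(f"{stage:12} average cycle time: {avg:.1f} days")
# The stage with the highest average cycle time is the likely bottleneck;
# tune the whole flow, not any one team.
```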

Deployment Frequency

How often do you deploy to your customers? This is a key metric in an Agile shop because it’s a good indirect indicator of agility in general. Agile depends on feedback, and you want that feedback as rapidly as possible, not just after you release, but during development. It’s much easier to fix problems when you catch them early; you don’t have to undo or redo work, for one thing. So, in a fully agile world, you gather enough information to get started at the beginning of your iteration, then use feedback collected during development to flesh out additional details. A very high deployment frequency is a great indicator that you’re doing that right. The best shops deploy a few times a day, maybe to just a subset of the entire user community, but to somebody who will actually use the software and give you feedback. Of course, you’ll need some level of automated CI/CD pipeline to pull that off.

The next issue is “batch size,” the size of your stories. In general, stories should be small: one or two days to implement, max. If you don’t know how to make them that small, it’s a learnable skill. (Hire me :-)) If you’re doing Scrum, not finishing a 13-point story that takes up the entire Sprint is a big deal. Not finishing a 1-point story is not. Size your stories small, pick four, and that’s your Sprint. No need to estimate or otherwise waste time doing things that detract from creating value. It’s dumb not to deploy stories the instant they’re complete. Deploying only once, at the end of your Sprint, is an anti-pattern in Scrum.

In the Lean world, small batches have many advantages as well. It’s easier to adjust flow. It’s easier to fix problems. In the Agile world, you get feedback faster, so you can fix problems sooner. Your metrics are more stable.

One final point: what you want here is the average team-deployment rate, not the rate for the entire organization. For example, we’ve all heard that Amazon deploys once every 11 seconds or so. That’s nice, but not particularly useful. The more interesting metric is that, on average, a team at Amazon deploys twice a day.
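Here’s a minimal sketch of that calculation, assuming you can pull per-team deploy counts from your CI/CD logs. The team names, counts, and 30-day window are all hypothetical.

```python
# A minimal sketch: per-team deployment rate rather than an org-wide total.
# Team names, counts, and the measurement window are all hypothetical.
from statistics import mean

deploys_per_team = {"checkout": 58, "search": 41, "billing": 12}
window_days = 30

rates = {team: count / window_days for team, count in deploys_per_team.items()}
for team, rate in rates.items():
    print(f"{team:10} {rate:.1f} deploys/day")
print(f"average team rate: {mean(rates.values()):.1f} deploys/day")
```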

Mean Time to Restore (MTTR)

How quickly does the team recover from a failure? In the ops world, failure is often (not always) easy to detect: you deploy; the system crashes. In software, it’s harder. For example, you may not have completely solved the problem specified in the story. Things might take too long to be useful. The UX might be suboptimal. The list is endless.

Architecture is also a factor. Rolling out a new version of a microservice takes a couple of minutes. Rolling out a new version of a monolith can take days.

Regardless of the actual problem, when you detect a failure (anything that needs fixing), how long does it take to roll out the fix? That includes the time required to fix the software; I’m not just talking about deployment time. This metric is a great measure of general agility: how fast can you roll out a change of any sort?
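A minimal sketch of that arithmetic, with invented incident timestamps; each pair runs from detection to restored service, so the fix time is included, not just the deploy:

```python
# A minimal sketch of MTTR: detection to restoration, which folds in the time
# spent fixing the software, not just deploying it. Timestamps are invented.
from datetime import datetime
from statistics import mean

incidents = [
    ("2024-03-01 09:15", "2024-03-01 11:40"),  # (detected, restored)
    ("2024-03-07 14:02", "2024-03-07 14:55"),
    ("2024-03-19 08:30", "2024-03-19 16:10"),
]

fmt = "%Y-%m-%d %H:%M"
hours = [
    (datetime.strptime(restored, fmt) - datetime.strptime(detected, fmt)).total_seconds() / 3600
    for detected, restored in incidents
]
print(f"MTTR: {mean(hours):.1f} hours")
```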

Change Fail Percentage

How often do you need to make a change because you got something wrong? This metric does not apply to the small incremental releases that you make during development just to verify that you’re on the right track. (I think of that as “Hey Fred, take a look at this” feedback.) That’s just the way Agile development works: get feedback often and adjust. However, if you think that a story is done, and you find out that it isn’t, that’s a problem. A high number means that you’re not getting feedback often enough, or you’re not spending the time needed to understand the initial problem. It can also indicate that you’re not talking to your actual customers effectively. You are talking to them, aren’t you? (As opposed to getting a filtered version through a particularly politicized product organization or PO.)
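The arithmetic itself is trivial. A minimal sketch, with hypothetical counts:

```python
# A minimal sketch of change-fail percentage, with hypothetical counts.
# "Hey Fred, take a look at this" iterations during development don't count;
# only changes you believed were done.
deployments = 120     # deploys in the measurement window
failed_changes = 9    # deploys that later needed remediation (rollback, hotfix)

print(f"change fail percentage: {100 * failed_changes / deployments:.1f}%")
```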

So, that’s really all you need. I’ve talked about bogus KPIs in another blog post; suffice it to say that commonplace metrics like velocity are utterly useless for improvement, and sometimes actively destructive.

The thinking underlying Accelerate is an ongoing process, by the way. DORA—DevOps Research and Assessment—constantly collects metrics from a wide variety of sources and puts out an annual State of DevOps report. It’s not very long, and it’s worth reading every year.

Eliyahu Goldratt, who wrote The Goal (another essential book), said, “Tell me how you measure me, and I will tell you how I will behave.” That’s absolutely correct. Be careful what you measure. The four Accelerate metrics are ones that change behavior in a good way. Use them.
