I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Monday, March 16, 2009

The Agile PMO: Automating Metrics Capture

The last piece of the Agile PMO puzzle is to make the PMO's data needs non-burdensome to delivery teams. It's all well and good to be able to get quality and performance data, but it has to be easily accessible; if it isn't, we're just taxing the teams that much more, and we won't get the data in a timely or efficient manner, if at all.

Automating Metrics Capture

Because they're derived directly from the asset under development, our metrics give us an objective way to index and monitor quality. What's even better is that these metrics can be automated and run frequently. If we have continuous integration established in our teams (where the project binary is built as frequently as every code commit), we can subject the binary just built to a battery of automated quality metrics and tests. Some tests may take a long time to run, while others may run in just a few seconds. This is fine: we can construct a build pipeline to run our metrics and tests in the most efficient order.
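To make the pipeline idea concrete, here is a rough sketch of a staged build in Python. The stage names and shell scripts are placeholders for whatever a given project actually runs; a real CI server (Cruise, for example) would express these stages in its own configuration rather than in a hand-rolled script like this.

    # A minimal sketch of a staged build pipeline. The commands are hypothetical
    # placeholders; a real project would substitute its own build and test scripts.
    import subprocess
    import sys

    # Fast feedback stages run first; slower suites run afterward, so a broken
    # build is caught as early as possible after each commit.
    STAGES = [
        ("compile",          ["./build.sh"]),                 # hypothetical build script
        ("unit tests",       ["./run_unit_tests.sh"]),        # seconds to run
        ("static analysis",  ["./run_static_checks.sh"]),     # complexity, duplication, etc.
        ("functional tests", ["./run_functional_tests.sh"]),  # may take much longer
    ]

    def run_pipeline():
        for name, command in STAGES:
            print(f"--- {name} ---")
            result = subprocess.run(command)
            if result.returncode != 0:
                # Fail fast: later (slower) stages are skipped when an early stage breaks.
                print(f"Stage '{name}' failed; stopping the pipeline.")
                sys.exit(result.returncode)
        print("All stages passed.")

    if __name__ == "__main__":
        run_pipeline()

The point of the ordering is fail-fast feedback: the cheap checks gate the expensive ones, so the team hears about most problems within minutes of a commit.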

We can collect up-to-the-minute quality data for any project, such as:

  • How much unit test coverage do we have, and are all tests passing?
  • How much functional test coverage do we have, and are all tests passing?
  • Are we creating code that looks costly to maintain, as indicated by complexity scores, duplication, or other signs of poor coding practice?

And so forth.

This gives us a collection of technical and functional risk indicators that are both comprehensive and current.
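To illustrate how little ceremony this takes, here is a small Python sketch that answers the first of those questions directly from a build artifact. It assumes a Cobertura-style coverage.xml; the file path and the 80% threshold are illustrative stand-ins, not recommendations.

    # A minimal sketch of reading unit test coverage from a Cobertura-style
    # coverage.xml produced during the build. Path and threshold are assumptions.
    import xml.etree.ElementTree as ET

    COVERAGE_THRESHOLD = 0.80  # illustrative target, not a universal rule

    def line_coverage(coverage_xml_path: str) -> float:
        """Return the overall line-rate recorded in the coverage report."""
        root = ET.parse(coverage_xml_path).getroot()
        return float(root.get("line-rate", 0.0))

    def report(coverage_xml_path: str) -> None:
        rate = line_coverage(coverage_xml_path)
        status = "OK" if rate >= COVERAGE_THRESHOLD else "BELOW TARGET"
        print(f"Unit test line coverage: {rate:.0%} ({status})")

    if __name__ == "__main__":
        report("build/coverage.xml")  # hypothetical path written by the build

The same pattern applies to functional test results, complexity scores and duplication reports: each is just another file the build leaves behind, waiting to be read.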


By efficiently automating the capture of different quality metrics, we don't need to ask people to generate this data for us: we can lift it right off the binary. This makes data collection non-invasive to the teams, and less prone to collection error.

Tools such as Cruise and Mingle (both from ThoughtWorks Studios) have dashboards that allow people to see firsthand the current quality and status of a number of different projects. This allows people in the PMO to look into what's actually happening without burdening the teams, and to make far more specific and accurate status reports to project stakeholders.
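As a generic illustration of the portfolio roll-up behind such a dashboard (this is not the Cruise or Mingle API; the field names and sample figures are invented for the example), consider:

    # A minimal sketch of rolling per-project metrics into a portfolio summary a
    # dashboard could display. Field names and sample figures are illustrative only.
    import json

    projects = [
        {"name": "Project A", "build": "passing", "unit_coverage": 0.86},
        {"name": "Project B", "build": "failing", "unit_coverage": 0.64},
    ]

    def portfolio_summary(projects):
        """Flag projects whose latest automated metrics suggest elevated risk."""
        at_risk = [p["name"] for p in projects
                   if p["build"] != "passing" or p["unit_coverage"] < 0.75]
        return {"projects": projects, "at_risk": at_risk}

    print(json.dumps(portfolio_summary(projects), indent=2))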

All told, we're spending less effort and getting a more accurate picture of what's happening within each project in the portfolio.

We now have all of the essential components of the agile PMO. Early on in this series, we talked about aligning executive and executor in how we organize, gatekeep, and articulate the work to be done. Once we've done that, the data we glean on performance and quality has integrity by virtue of being solidly founded on results achieved, not on hope that everything will work out in a future we've mortgaged to the late stages of a project. By automating collection of this data, we can see how a project is evolving day in and day out with far less effort, and far less conjecture, than we've traditionally had in IT.

In the next and final installment of this series, we'll recap how all of these concepts and practices fit together, consider some caveats, and present some actionable items you can get started with today.