
Wednesday, November 26, 2008

The PMO Divide

This content is derived from a webinar I presented earlier this month titled The Agile PMO: Real-Time Metrics and Visibility. This is the first of a multi-part series.



We’ve all seen it: the project that reports “green” status on its stop-and-go light report for months suddenly goes red in the late stages of development. This is nothing new to IT; projects crater without warning all the time. But it raises the question: why does this happen as often as it does?

Program Management Offices (PMOs) sit at the nexus of this. PMOs are responsible for keeping an eye on the performance of the IT project portfolio. They sit between the executives who sponsor IT projects and the teams that execute them, which makes the PMO responsible for bridging the divide between the two groups. But this divide is wider than we think. All too often we end up with overworked project managers on one side, frustrated at doing double duty managing a team and filling out status reports, and angry, humiliated business sponsors on the other, blindsided by sudden reversals in project status.

Let's look at what it means to sit between executive and executor.

Facing “upward” to project sponsors, the PMO needs to be able to report status: it must show that spend is under control and that demonstrable progress is being made. Facing “downward,” the PMO needs to get spend and progress information from delivery teams. But because of the way most IT projects are structured, these aren’t easy questions to answer, and this creates an information gap.

IT projects are often structured by area of technology specialization (e.g., user interface, middle tier, server side, database, specialists in things like ERP systems) or by component (e.g., one team works on the rating engine, another on the pricing engine, and so forth). This means that development of a bit of business functionality is splintered into independent effort performed by lots of specialists. Those individually performed tasks then need to be integrated and tested from end to end. Integration is an opaque, under-funded phase most often scheduled late in the project. End-to-end testing - the best indicator of success - can’t take place until integration is complete. This means that lots of development tasks may be flagged as “complete,” but they’re complete only by assertion of a developer, not by “fact” of somebody who has exercised the code from end to end.
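
To make that distinction concrete, here is a minimal sketch in Python of what “complete by fact” looks like. The QuoteScreen, PricingService, and QuoteDatabase classes are invented stand-ins for the layers separate specialist teams would own; the point is the test, which exercises one path through every layer and checks the business result, rather than ticking a checkbox next to each layer’s task.

```python
import unittest

# Hypothetical stand-ins for layers owned by separate specialist teams;
# the names and the pricing rule are invented for this example.
class QuoteDatabase:
    def __init__(self):
        self._rows = {}

    def save(self, quote_id, quote):
        self._rows[quote_id] = quote

    def load(self, quote_id):
        return self._rows[quote_id]

class PricingService:
    MARKUP = 1.2  # illustrative flat markup

    def price(self, items):
        return round(sum(items) * self.MARKUP, 2)

class QuoteScreen:
    def __init__(self, service, database):
        self.service = service
        self.database = database

    def submit(self, quote_id, items):
        total = self.service.price(items)
        self.database.save(quote_id, {"items": items, "total": total})
        return total

class QuoteEndToEndTest(unittest.TestCase):
    # "Complete by fact": one path through UI, service, and database
    # must produce the expected business result.
    def test_submit_prices_and_persists_a_quote(self):
        database = QuoteDatabase()
        screen = QuoteScreen(PricingService(), database)
        total = screen.submit("Q-1", [100.0, 50.0])
        self.assertEqual(total, 180.0)
        self.assertEqual(database.load("Q-1")["total"], 180.0)

if __name__ == "__main__":
    unittest.main()
```

Until a check of this kind passes, each layer’s “done” is still just an assertion.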

What this means to the PMO is that when it looks “downward” to get an answer for somebody “upward,” there’s a fair bit of conjecture in the answer. By deferring integration and testing, the whole of what we have at any given point in time is less than the sum of the parts. Code has been written, but it may not be functional, let alone useful. Measures of progress and spend are therefore highly suspect: they are lagging indicators of effort, not forward-looking indicators of results. And when we use effort as a proxy for results, we inflate our sense of progress. In traditional, effort-centric IT, there is nothing preventing us from reporting inflated numbers for months on end. The longer we do this, the greater the risk of being blindsided.
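
A toy calculation - the numbers are invented - shows how wide the gap between the two measures can be. Counting tasks marked done says the project is mostly finished; counting features verified end to end tells a very different story.

```python
# Invented numbers for a project whose layer tasks are "done" by
# developer assertion but whose integration and end-to-end testing
# have not yet started.
tasks = {
    "ui":               {"hours_logged": 300, "marked_done": True},
    "services":         {"hours_logged": 400, "marked_done": True},
    "database":         {"hours_logged": 200, "marked_done": True},
    "integration":      {"hours_logged": 0,   "marked_done": False},
    "end_to_end_tests": {"hours_logged": 0,   "marked_done": False},
}

features_planned = 10
features_verified = 1  # demonstrably working, end to end

# Effort-centric view: share of tasks marked done (by assertion).
effort_progress = sum(t["marked_done"] for t in tasks.values()) / len(tasks)

# Results-centric view: share of features demonstrably working.
results_progress = features_verified / features_planned

print(f"Reported progress (effort):      {effort_progress:.0%}")   # 60%
print(f"Demonstrable progress (results): {results_progress:.0%}")  # 10%
```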

This doesn’t become a serious problem in every project, because the gap may or may not translate into serious risk; the degree of exposure depends on the situation in the team. Since we know from experience that some teams seem to succeed while others fail, it’s worth exploring why that is.

In the best-case scenario, reporting up to the PMO is a nuisance to a project manager. The data the PMO is asking for isn’t what the PM uses to manage the project, so filling out status reports is a distraction. It can only truly be a nuisance and not an outright risk, though, if the team itself has the behaviors and communications in place to complete its objectives in a business context. That is, the sum of the tasks in the project plan might not describe what needs to be done to complete business delivery, but the team may have the right leadership and membership such that it takes responsibility for completing the delivery anyway. So while there may be “leakage” in the project budget and timeline - because not everything the team does is fully tasked out, and what is tasked is imperfectly tracked in time-entry systems and the like - the impact of this leakage is contained, because the team is by its very nature working toward the goal of completion. There may be many reasons why this is the case. Perhaps the team has worked together for years and knows how to build in contingency to cover the small overages. Or perhaps it’s simply a team with few skill silos. Regardless of the reason, leakage is contained when the right team dynamic is in place.

In the worst-case scenario, people in silos work to complete their tasks to the point where nobody can tell them their tasks aren’t done. Working to complete tasks, of course, isn’t quite the same as working to complete functionality: finishing the UI code, the services, and some server-side code does not necessarily add up to a complete business solution. In very large projects it isn’t always clear who is responsible for the complete solution. Is it the business analyst? The last person to commit code in support of a use case? The project manager? The QA tester? This responsibility void is made more acute by the fact that the “last mile” is the hardest: the steps necessary to integrate all the bits of code so that everything lines up, technically performs, meets functional needs, and satisfies non-functional requirements are always the most difficult. In a large project structured around technology specialization (and very often made worse by a staff of “passengers” fulfilling tasks rather than “drivers” completing requirements), we don’t have leakage, we have full-scale hemorrhage. No amount of contingency can cover this.

This means that in traditional IT, the PMO isn’t bridging the divide. The data it gets from teams isn’t reliably forward-looking: reporting against task completion inflates progress, and spend data is simply cost-of-effort that doesn’t translate directly into cost-for-results. Both the progress reports and the cost controls mislead.

This puts the PMO in the position of underwriting the risk of the development capacity sourced to complete a project. Work is being done - we know this from the timesheet data and task orders - but there’s no map from timesheet data to the degree to which a business need is functionally complete, and no way to know that it’s technically sound. In effect, the PMO is the buyer’s agent for an asset, underwriting the risk of developing that asset, yet it isn’t making informed decisions with the state of the asset in full view at all times. To get visibility, PMOs typically scrutinize the minutiae, decomposing the project into further levels of detail and precision. Ironically, the greater the specialization baked into the plan, the more likely we are to miss the things that get the software into a functionally complete state. For all of this alleged precision, we may have more data, but in the end we have less information.
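
As a sketch of what a results-based alternative looks like - the figures below are invented - dividing spend by verified, working features instead of by hours logged gives the PMO a unit cost it can actually project forward from, rather than a burn rate it can only look back on.

```python
# Invented figures: timesheets tell us what effort costs, but only
# verified results turn spend into a forward-looking view of the asset.
monthly_spend = 250_000            # from timesheets and invoices
features_verified_this_month = 2   # passed end-to-end acceptance
features_remaining = 38

cost_per_verified_feature = monthly_spend / features_verified_this_month
projected_cost_to_complete = cost_per_verified_feature * features_remaining

print(f"Cost per verified feature:  ${cost_per_verified_feature:,.0f}")
print(f"Projected cost to complete: ${projected_cost_to_complete:,.0f}")
```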

How can we bridge this divide? By managing and measuring incremental results, not collective effort. This aligns day-to-day activity with topline reporting. That, in turn, reduces our exposure to late-stage project collapse.

Ultimately, we want the PMO to have real-time, forward-looking information about its project portfolio, and to be able to get that information in a manner that doesn’t burden project teams. Getting to this future state will require some realignment. In coming posts we’ll look at the IT organization and practices, as well as the measures of progress and quality, that will allow us to do this. As a first step, we need to reconsider what we use as project gatekeepers, basing our decisions not on descriptions of the work we expect to do, but on the actual state of the asset under development.