Several years ago, I visited a U.S. government agency that tracked more than 1000 different performance measures representing nearly every aspect of their operations. The measures came from multiple operational systems, a dozen or more Excel workbooks, and several employee and citizen surveys. The entire process was automated, allowing many of the measures to be updated daily or even hourly.
The end-user interface was also impressive. Traffic lights blinked when they changed from yellow to red. Signal flags waved when items were trending down or when data failed to load properly. And everything was Section 508 compliant.
At the end of the 45-minute demonstration of the state-of-the-art system, the presenter asked the audience for questions. Other attendees asked about the total cost to develop the system (millions of dollars), how long it took to get into production (more than two years), and the underlying technology architecture (CORBA, as I recall). Eventually, I got a chance to chime in and wondered out loud, “What will you do if most of those measures turn red at the same time?”
The presenter was visibly stunned. After a long pause, he answered, “I guess we’d all look for new jobs.”
The audience laughed, but I suspect the presenter was only half-joking. Despite several years and millions of dollars, it appeared that no one had thought about what actions to take based on the extraordinary amount of information they were collecting. They had automated measurement mania. Maybe they should have been fired.
Although I’m not sure I recognized it at the time, I was asking a question about prioritization rather than consequences. Said another way: if two measures both turn red and there are limited resources to address them, which one should get higher priority? Answering that question requires you first to determine what actions you should take when an individual measure turns red or starts trending down quickly. And the only way to do that is to make sure every measure is associated with an objective. The higher-priority objective should be worked on first.
Consider priority. In many organizations it’s based on who has the most political capital or who yells the loudest; in others, it’s based on dollars invested or people involved. If you’re a strategy-focused organization, priority should be based on impact. Because impact is often difficult to determine, a reasonable approach is to establish cause-and-effect relationships between the objectives. Everything else being equal, higher priority should be given to objectives on the cause side of the equation. Fixing those first helps ensure that they don’t later drag down otherwise successful objectives.
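To make that concrete, here’s a minimal sketch in Python. The objectives, measures, and strategy map are entirely hypothetical, invented for illustration. It treats the strategy map as a directed graph with edges pointing from causes to effects, associates each measure with one objective, and ranks red measures by how many downstream objectives their objective could drag down.

```python
# A minimal sketch of cause-and-effect prioritization.
# All objective and measure names below are hypothetical.

# Strategy map: each cause objective points to the objectives it influences.
CAUSE_EFFECT = {
    "employee_training": ["process_quality"],
    "process_quality": ["citizen_satisfaction", "cost_per_case"],
    "citizen_satisfaction": [],
    "cost_per_case": [],
}

# Every measure is associated with exactly one objective.
MEASURE_TO_OBJECTIVE = {
    "training_hours": "employee_training",
    "error_rate": "process_quality",
    "survey_score": "citizen_satisfaction",
}

def downstream_count(objective, graph):
    """Count objectives reachable from this one via cause->effect edges."""
    seen = set()
    stack = list(graph.get(objective, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return len(seen)

def prioritize(red_measures):
    """Order red measures so cause-side objectives come first."""
    return sorted(
        red_measures,
        key=lambda m: downstream_count(MEASURE_TO_OBJECTIVE[m], CAUSE_EFFECT),
        reverse=True,
    )

if __name__ == "__main__":
    # If all three turn red at once, work the upstream cause first.
    print(prioritize(["survey_score", "error_rate", "training_hours"]))
    # -> ['training_hours', 'error_rate', 'survey_score']
```

In this toy map, a red training measure outranks a red survey score because everything downstream ultimately depends on it. Real strategy maps are messier, of course, but the principle is the same: fix causes before effects.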
A few years later, I was at a conference in D.C. and attended a talk by the same government agency. This presenter waxed poetic about a million-dollar software package they were deploying to address architectural flaws in their now-outdated performance management system. Its key selling point: J2EE architecture.
I’m obviously a huge believer in automating performance management; after all, that’s what I do. But focus on what will have the highest impact, rather than measuring anything and everything. Otherwise, you’re automating measurement mania.
I would have liked you to expand on your point about priorities, perhaps even give a short example. Most people have experience with keeping the loudest voice happy and with the politics of the status quo. What would giving higher priority to objectives on the cause side of the equation look, sound, or feel like? Just enough so that some of us understand some possibilities. That way, if you asked us the same question about our indicators being red, we might have a better answer than the presenter.