Predictions, redux

Even though my recent post on BI predictions is still generating higher-than-normal traffic, I assumed that the annual season of prognostication had died down; after all, it’s already February.  But over on the BI Questions Blog, Timo takes a break from declaring the death of One Version of the Truth to suggest some issues that BI/PM vendors will have to deal with in the future.  Aside from my longstanding quibble that BI and PM aren’t the same, he makes some interesting points, so I’ve reproduced his list and added my own observations:

More explicit allowance for “fuzzy” data and margins of error

BI systems grew up analyzing hard quantitative data: # units shipped, # orders received, etc. It’s assumed that these “facts” are incontrovertible, which is why so many vendors like to talk about one version of the truth.  Performance information, however, is rarely so black and white.  Is the distinction between 3.4 and 3.5 on a customer satisfaction rating really meaningful?  Should planning data (I expect to sell N units next week) always come with a degree of confidence (I’m only XX% sure)?
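To make the idea concrete, here is a minimal sketch of what a “fact” carrying an explicit margin of error might look like. The FuzzyMeasure class and its overlap test are my own illustration, not a feature of any shipping BI product:

```python
from dataclasses import dataclass

@dataclass
class FuzzyMeasure:
    """A measurement with an explicit margin of error."""
    value: float
    margin: float  # half-width of the uncertainty band

    def differs_from(self, other: "FuzzyMeasure") -> bool:
        """Treat two measures as different only when their
        uncertainty bands do not overlap."""
        return abs(self.value - other.value) > (self.margin + other.margin)

# A 3.4 vs. 3.5 satisfaction score with +/-0.2 margins is not
# a meaningful difference:
q1 = FuzzyMeasure(3.4, 0.2)
q2 = FuzzyMeasure(3.5, 0.2)
print(q1.differs_from(q2))  # False
```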

Many modern performance management systems handle the latter case with grading systems, which set a tolerance for the gap between actual and target ahead of time.  In some cases, 90% achievement is enough to earn a green; in others, 99% might be needed.
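As a minimal sketch, assuming a simple traffic-light convention with per-measure thresholds (the grade function and its default cutoffs are illustrative, not any particular vendor’s API):

```python
def grade(actual: float, target: float, green_at: float = 0.90,
          amber_at: float = 0.75) -> str:
    """Map the ratio of actual to target onto a traffic-light grade.
    The green/amber thresholds are set per measure ahead of time."""
    if target == 0:
        raise ValueError("target must be non-zero")
    achievement = actual / target
    if achievement >= green_at:
        return "green"
    if achievement >= amber_at:
        return "amber"
    return "red"

# A measure that tolerates 90% achievement:
print(grade(actual=92, target=100))                 # green
# A stricter measure that demands 99%:
print(grade(actual=92, target=100, green_at=0.99))  # amber
```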

Support for multiple hypotheses that can be tested as new data becomes available

In the early stages of their performance management journey, people often struggle to set targets for measures and use the lack of baseline data as an excuse.  In many cases, the issue of baseline data is a red herring; the real issue is that they only measure once per year.  If the first measurement is a year away, it will be two years before they know if performance has improved.  Obviously, monthly or quarterly measurement solves this problem, but in some cases more frequent measurement isn’t practical.

Another alternative is to rely on both qualitative and quantitative assessments of performance.  Compare a survey of the appropriate stakeholders’ opinions of performance at the beginning of year one with the same survey a year later.  The stakeholders should be those who have a vested interest in seeing the objective improve, but not those who are directly compensated on the result.  This provides another view into performance, which may lead to a different hypothesis.

Much better systems for collaboration around BI

One of the downsides of effective BI is that you end up generating many more reports than any single person can use.  How do you find out about other interesting reports?  Reports should have user-assessed ratings, like the stars in iTunes.  The system should recommend other reports that you might be interested in based on the ones you’ve viewed; think of Amazon’s collaborative filtering.  Users should be able to have interactive discussions about reports, not just leave static comments explaining the content.
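As a toy sketch of the Amazon-style idea, the snippet below counts how often two reports are viewed by the same user and recommends the most frequent companions. The report names and the views data are invented for illustration, and a real system would use a proper collaborative-filtering model:

```python
from collections import defaultdict
from itertools import combinations

# Who has viewed which reports (hypothetical data).
views = {
    "alice": {"sales_by_region", "churn_trend", "pipeline"},
    "bob":   {"sales_by_region", "pipeline"},
    "carol": {"churn_trend", "nps_summary"},
}

# Count how often each pair of reports is viewed by the same user.
co_views = defaultdict(int)
for reports in views.values():
    for a, b in combinations(sorted(reports), 2):
        co_views[(a, b)] += 1

def recommend(report: str, top_n: int = 3) -> list[str]:
    """Suggest reports most often viewed alongside the given one."""
    scores = {}
    for (a, b), count in co_views.items():
        if a == report:
            scores[b] = scores.get(b, 0) + count
        elif b == report:
            scores[a] = scores.get(a, 0) + count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("sales_by_region"))  # ['pipeline', 'churn_trend']
```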

The use of more advanced strategy management systems that go far beyond simple dashboards and scorecards, and allow every level of the organization to participate actively in gathering information and contributing to strategy.

How could I possibly disagree?  It sounds like Timo has been reading my blog.

One Response to Predictions, redux

  1. Robert E, February 5, 2008 at 2:34 am

    I really like the user-assessed ratings for reports. Anything that would encourage people to broaden their view into the workings of the organization is welcome.

    This is another cool way to look at collaboration. Not only would you get people looking at different reports, there is also a strong possibility of getting new ideas from unexpected places.

    Of course, that means fostering an organizational culture of sharing information rather than hoarding it.
