I’ve been on the road for the last two weeks so it’s only in the last couple of days that I’ve been able to catch up with some of the items on my to-do list. This morning I listened to a Web seminar that provided a three-step method for moving from basic measures to key performance indicators (KPIs). While the three-step method was useful, I was fascinated by a factoid that came out about ¾ of the way through the Web seminar: less than 15% of respondents to a survey were using qualitative KPIs to monitor their performance. The rest were only using quantitative KPIs.
[Note: For those of you who may not be familiar with these terms: quantitative KPIs are considered fact-based measurements, such as the number of trouble tickets closed, drawn from relational databases or transactional systems such as ERP or CRM. Qualitative KPIs, such as customer satisfaction, are seen as softer and more subjective, and are mostly taken from semi-structured sources such as surveys, emails, or Excel worksheets.]
While the presenter didn’t provide many details about the respondents, the survey results don’t necessarily surprise me, but they do sound a potential cautionary note. In my own consulting work, I often recommend that an organization start a performance management deployment with only qualitative KPIs and add quantitative ones over time as it gathers more experience and better understands what it should measure. While a mix in which only 15% of KPIs are qualitative might be appropriate for a mature deployment, I got the sense that most of the survey respondents were earlier in their exploration of performance management.
So, why the cautionary note? In my experience, when organizations start with metrics based on transactional data, they fall prey to the “we track it because we can” syndrome. When metrics are created simply because the data is available, they often bear little connection to organizational objectives. For example, a lot of contact centers have created a metric for the number of calls handled by each contact center agent. However, that quantitative metric isn’t appropriate if the primary objective for the contact center is either increasing customer satisfaction or moving to customer self-service.
Relying on transactional systems also means that the resulting quantitative metrics reflect an organization’s activity or outputs, not the outcomes it is trying to achieve. Using the contact center again, it’s tempting to create an activity metric such as the percentage of calls returned the same day as a way of measuring customer satisfaction because, on the surface, it seems related to satisfaction and the data is contained in most call center CRM solutions. However, using this metric might encourage contact center agents to have shorter, less satisfactory calls in order to complete more calls each day and earn higher ratings. As a result, customer satisfaction might actually decrease. Therefore, it might be more appropriate to create a qualitative metric based on a regular survey of a portion of the customer base to gauge reported satisfaction. This survey metric, while subjective, more directly reflects the intended outcome.
Please don’t misinterpret me. I’m not suggesting that quantitative metrics are bad and qualitative ones are good. In fact, surveys have a wide range of potential disadvantages that can lead to unexpected biases in the qualitative metrics. Response biases such as self-lifting and the Hawthorne effect happen because respondents want to please the questioner and therefore tend to provide more favorable answers simply because they were selected for the survey. Perhaps unexpectedly, this response bias doesn’t seem to lessen when the questions are presented electronically instead of through a live interviewer. Equally troubling is what is sometimes called rosy retrospection bias, in which respondents rate past events more favorably than they rated them when the events originally occurred. Qualitative metrics that are collected quarterly and ask respondents to consider the past three months are particularly susceptible to this effect.
Organizations also need to be careful about survey fatigue and the habit effect. I believe that individuals should not be asked to respond to surveys more often than monthly, and no monthly survey should contain more than about five to seven questions. Beyond these ranges, even motivated respondents develop survey fatigue. The habit effect is a term I use for what happens when surveys contain a series of similar-looking questions and respondents fall into a habit of answering them similarly without carefully thinking about each on its own. In some cases, we can guard against this effect by varying the question format from check boxes to one-word responses to open-ended questions. However, this approach doesn’t work if you use standardized response ranges (e.g., 1 to 100); in that case, you should limit the total number of questions to counteract both survey fatigue and the habit effect.
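For teams automating their survey workflow, the limits above can be encoded as a simple pre-flight check. This is only a minimal sketch: the function name, the thresholds, and the choice of 28 days as "roughly monthly" are my own framing of the guidance, not anything prescribed in the original.

```python
# Sketch of the survey-design limits described above: no more than one
# survey per respondent per month, and no more than about seven questions
# per monthly survey. Names and exact thresholds are illustrative assumptions.

MAX_QUESTIONS = 7        # upper end of the five-to-seven question range
MIN_DAYS_BETWEEN = 28    # roughly "no more often than monthly"

def check_survey(num_questions, days_since_last_survey):
    """Return a list of warnings if the survey risks fatigue or habit effects."""
    warnings = []
    if num_questions > MAX_QUESTIONS:
        warnings.append(
            f"{num_questions} questions exceeds the ~{MAX_QUESTIONS}-question limit"
        )
    if days_since_last_survey < MIN_DAYS_BETWEEN:
        warnings.append("respondents were last surveyed less than a month ago")
    return warnings
```

A ten-question survey sent two weeks after the last one would trip both warnings; a five-question survey after a month would pass cleanly.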
I have no hard and fast rules about when to use qualitative KPIs and when to use quantitative ones, and relying on a mix of the two will best suit most organizations. In fact, in my experience, having both a qualitative and a quantitative KPI monitoring the same objective can be extremely beneficial. It can be enlightening to see where people’s beliefs about the state of an objective differ from the scores coming from your operational systems. Those that differ the most are the ones that you should spend the most time on.
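That last idea, pairing the two kinds of KPI and chasing the biggest gaps, is easy to operationalize once both scores are on a common scale. A minimal sketch, assuming each objective carries one survey-based and one system-based score already normalized to 0–100 (the objective names and numbers below are invented for illustration):

```python
# Rank objectives by the gap between a qualitative (survey) score and a
# quantitative (operational-system) score, largest gap first. All data
# here is hypothetical; both scores are assumed normalized to 0-100.

objectives = {
    # objective: (qualitative score, quantitative score)
    "customer satisfaction": (62, 88),
    "first-call resolution": (75, 71),
    "self-service adoption": (40, 55),
}

def rank_by_divergence(scores):
    """Return objective names sorted by the absolute qual/quant gap,
    largest first -- the objectives most worth investigating."""
    return sorted(
        scores,
        key=lambda name: abs(scores[name][0] - scores[name][1]),
        reverse=True,
    )

for name in rank_by_divergence(objectives):
    qual, quant = objectives[name]
    print(f"{name}: gap = {abs(qual - quant)}")
```

Here "customer satisfaction" surfaces first with a 26-point gap between what customers report and what the operational numbers imply, which is exactly the kind of divergence the paragraph above suggests spending time on.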