Qualitative KPIs

I’ve been on the road for the last two weeks so it’s only in the last couple of days that I’ve been able to catch up with some of the items on my to-do list.  This morning I listened to a Web seminar that provided a three-step method for moving from basic measures to key performance indicators (KPIs).  While the three-step method was useful, I was fascinated by a factoid that came out about ¾ of the way through the Web seminar: less than 15% of respondents to a survey were using qualitative KPIs to monitor their performance.  The rest were only using quantitative KPIs. 

[Note: For those of you who may not be familiar with these terms, quantitative KPIs are considered fact-based measurements, like the number of trouble tickets closed, drawn from relational databases or transactional systems such as ERP or CRM. Qualitative KPIs, like customer satisfaction, are seen as softer and more subjective and are mostly taken from semi-structured sources such as surveys, emails, or Excel worksheets.] 

While the presenter didn’t provide many details about the respondents, the survey results don’t necessarily surprise me, but they do sound a potential cautionary note.  In my own consulting work, I often recommend that an organization start a performance management deployment with only qualitative KPIs and add quantitative ones over time as it gathers more experience and better understands what it should measure.  While a mix of 15% qualitative KPIs might be appropriate for a mature deployment, I got the sense that most of the survey respondents were earlier in their exploration of performance management. 

So, why the cautionary note?  In my experience, when organizations start with metrics based on transactional data, they fall prey to the “we track it because we can” syndrome. When metrics are created simply because the data is available, they often bear little connection to organizational objectives. For example, a lot of contact centers have created a metric for the number of calls handled by each contact center agent. However, that quantitative metric isn’t appropriate if the primary objective for the contact center is either increasing customer satisfaction or moving to customer self-service. 

Relying on transactional systems also means that the resulting quantitative metrics reflect an organization’s activity or outputs, not the outcomes it is trying to achieve. Using the contact center again, it’s tempting to create an activity metric such as percentage of calls returned in the same day as a way of measuring customer satisfaction because, on the surface, it seems related to satisfaction and the data is contained in most call center CRM solutions. However, using this metric might encourage contact center agents to have shorter, less satisfactory calls in order to complete more calls each day and get higher ratings.  As a result, customer satisfaction might actually decrease.  Therefore, it might be more appropriate to create a qualitative metric based on a regular survey of a portion of the customer base to gauge reported satisfaction. This survey metric, while subjective, more directly reflects the intended outcome. 

Please don’t misinterpret me. I’m not suggesting that quantitative metrics are bad and qualitative ones are good.  In fact, surveys have a wide range of potential disadvantages which can lead to unexpected biases in the qualitative metrics. Response biases such as self-lifting and the Hawthorne effect happen because respondents want to please the questioner and therefore tend to provide more favorable answers simply because they were selected for the survey.  Perhaps unexpectedly, this response bias doesn’t seem to lessen when the questions are presented electronically instead of through a live interviewer.  Equally troubling is what is sometimes called rosy retrospection bias, in which respondents rate past events more favorably than they rated them when the events originally occurred.  Qualitative metrics that are collected quarterly and ask respondents to consider the past three months are particularly susceptible to this effect. 

Organizations also need to be careful about survey fatigue and the habit effect.  I believe that individuals should not be asked to respond to surveys more often than monthly, and no monthly survey should contain more than about five to seven questions.  Beyond these ranges, even motivated respondents develop survey fatigue.  The habit effect is a term I use for when surveys contain a series of similar-looking questions and respondents fall into a habit of answering them similarly without carefully thinking about each on its own. In some cases, we can guard against this effect by varying the question format from check boxes to one-word responses to open-ended questions.  However, this approach doesn’t work if you use standardized response ranges (e.g., 1 to 100); in that case, you should limit the total number of questions to counteract both survey fatigue and the habit effect. 

I have no hard and fast rules about when to use qualitative KPIs and when to use quantitative ones, and relying on a mix of the two will best suit most organizations.  In fact, in my experience, having both a qualitative and a quantitative KPI monitoring the same objective can be extremely beneficial.  It can be enlightening to see where people’s beliefs about the state of an objective differ from the scores coming from your operational systems.  Those that differ the most are the ones that you should spend the most time on.
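To make the pairing idea concrete, here is a minimal sketch of ranking objectives by the gap between a survey-based score and a system-based score. Every KPI name and number below is invented for illustration, and the sketch assumes both scores have already been normalized to a common 0–100 scale upstream:

```python
# Hypothetical paired KPI scores per strategic objective, each
# normalized to 0-100 so the two kinds of measure are comparable.
kpi_pairs = {
    "customer_satisfaction": {"qualitative": 62, "quantitative": 88},
    "first_call_resolution": {"qualitative": 75, "quantitative": 78},
    "self_service_adoption": {"qualitative": 40, "quantitative": 71},
}

def divergence_report(pairs):
    """Return (objective, gap) tuples sorted largest gap first.

    The gap is the absolute difference between what people report
    (qualitative) and what the operational systems say (quantitative).
    """
    gaps = [
        (name, abs(p["qualitative"] - p["quantitative"]))
        for name, p in pairs.items()
    ]
    return sorted(gaps, key=lambda item: item[1], reverse=True)

for objective, gap in divergence_report(kpi_pairs):
    print(f"{objective}: gap of {gap} points")
```

With these invented numbers, self-service adoption shows the widest divergence between belief and system data, so it would be the first objective to investigate.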

3 Responses to Qualitative KPIs

  1. Robert E September 1, 2006 at 10:00 pm #

    After putting in all the disclaimers about qualitative measures, I was surprised that you didn’t mention the false sense of certainty that is usually associated with quantitative ones. As you mention, there are any number of biases or motivations that can affect the usefulness of qualitative measures. There are also any number of biases or motivations that go along with collecting and publishing numbers. At every level in an organization people can artificially inflate or “game” the numbers. And while your example of returning calls in the same day is a good one, there are also the numerous examples found in the business section of companies restating their earnings. Since it seems that workers, executives, and politicians have an inclination to show only the numbers that make them look good, context is important to understand the quality of the quantity.

  2. Kurt Bilafer September 20, 2006 at 10:38 pm #

    I think the real value of qualitative KPIs is captured in your last paragraph.
    Qualitative KPIs, when benchmarked to quantitative KPIs for the same objective, help identify areas of significant misalignment. Since poor alignment is seen as the number 1 cause of poor execution (see the most recent McKinsey survey: http://www.mckinseyquarterly.com/article_page.aspx?ar=1819&L2=21&L3=37), it makes sense to have qualitative KPIs for each Strategic Objective.

    I think the bigger issue is ensuring that you are surveying the correct people and asking the right questions.

  3. Jonathan September 21, 2006 at 2:05 am #

    Robert and Kurt: Good points both of you. I emphasized the issues with qualitative KPIs because I don’t think many people understand the issue and even fewer are talking about it. The attitude is typically just “I’ll create a survey that asks a few questions. How hard can that be?” In reality, bad survey design causes bad data, which leads to bad decisions. Yes, use qualitative KPIs — just don’t shortchange the design process because they are “subjective”.
