When we try to observe our business performance by walking around, our observation is limited: by how far our eye can see, by the snapshot of time we are there, by what our eye happens to notice, by what our ear happens to hear, by what is in plain sight. We miss what is happening somewhere else, what we don’t happen to notice, what people are not saying, and what isn’t in plain sight. So in assessing business performance, our past experience and our intuition are not illuminating lights. They are filters.
But when we use performance measures, or KPIs, we bypass these filters. Measures give us a more objective picture of how our business is really performing. They need to be well designed, though, and most are not, because of a few common traps we inadvertently fall into.
Trap #1: Not recognising an immeasurable goal.
An immeasurable goal is an outcome or result that we want to measure, but it’s worded so broadly or vaguely that we struggle to anchor it in the tangible world in which it’s supposed to happen. We won’t succeed in measuring a goal until we’ve made sure it’s measurable. To be measurable, the words that articulate it must make it observable.
Trap #2: Letting the weasel in.
When we produce a list of draft performance measures for our goal, it is very easy to use weasel words to describe those measures. Efficiency Ratio. Staff Productivity. Employee Engagement. Workforce Capability. Customer Loyalty. They are all weaselly, because they can mean different things to different people. We’ll get better measures when we write them in plain English that avoids ambiguity and the possibility of multiple interpretations.
Trap #3: Not writing quantitative measures.
Performance measures are quantitative. They must be articulated in specifically quantitative terms. This means that when we write a measure, we need to follow a two-part quantification recipe. Part 1 is the statistic, such as a percentage, average, sum, or count. Part 2 is the data item the measure is built from, such as customer satisfaction rating, employee injuries, hours of rework, or delivery cycle time.
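To make the two-part recipe concrete, here is a minimal sketch in Python, using made-up sample data (the ratings and delivery times below are illustrative, not from any real business):

```python
from statistics import mean

# Hypothetical data items (Part 2 of the recipe)
satisfaction_ratings = [8, 9, 6, 10, 7, 8]   # customer satisfaction rating, 1-10 scale
delivery_days = [2, 5, 3, 1, 4, 2]           # delivery cycle time, in days

# Statistic (Part 1) applied to the data item (Part 2):
# "average customer satisfaction rating"
avg_satisfaction = mean(satisfaction_ratings)

# "percentage of deliveries completed within 3 days"
pct_on_time = 100 * sum(1 for d in delivery_days if d <= 3) / len(delivery_days)
```

Notice that each measure name reads as statistic-plus-data-item; if we can’t name both parts, we don’t yet have a quantitative measure.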
Trap #4: Prioritising feasibility over relevance.
“We don’t have any data for that.” That’s a common reason for not choosing measures. But often these measures are very powerful evidence of the result to be measured. If we limit our measures to the data we have, we’ll never have the data we need. Yes, we do need to take feasibility of data collection into account, but relevance trumps feasibility.
Trap #5: Writing vague measure names and descriptions.
Vague measure names and limp descriptions (or no descriptions at all) are a terrible starting point for implementing new measures. The ambiguity wastes time and effort later on, when no one has any clue what exactly to report. Customer Satisfaction might be a good name for a measure, but it’s not a measure without a clear description: “The average rating that customers who were active in the last month gave us, on a scale of 1 to 10, for how satisfied they were with our overall service delivery to them in the past month.”
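A clear description like this is precise enough to compute directly. As a sketch, assuming a simple hypothetical record of (customer, last-active date, rating) tuples and an arbitrary reporting date:

```python
from datetime import date, timedelta
from statistics import mean

reporting_date = date(2016, 6, 30)              # hypothetical reporting date
month_ago = reporting_date - timedelta(days=30)

# Hypothetical survey records: (customer_id, last_active_date, rating on 1-10 scale)
responses = [
    ("c1", date(2016, 6, 20), 9),
    ("c2", date(2016, 6, 5), 7),
    ("c3", date(2016, 4, 1), 2),   # not active in the last month: excluded
]

# Only customers active in the last month count, per the description
active_ratings = [r for _, last_active, r in responses if last_active >= month_ago]
customer_satisfaction = mean(active_ratings)
```

The description told us exactly which customers to include, which scale to use, and which statistic to apply; with a weaker description, every one of those choices would be left to whoever happens to build the report.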
Better measures mean better decisions, and that’s how business performance improves. And better measures will only come from a deliberate measure design process that makes sure our measures are the best evidence of our business’s actual performance.
Written by: Stacey Barr.