When we discuss performance improvement, we think about a measurable baseline, an ending performance level, and the quantified difference (the gap) to be achieved. To demonstrate a relevant gain, three things need to be in place: a valid measure of the appropriate type, a valid target, and the quantified value of the difference between the baseline performance level and the target. Let’s review a few key points for each of these.
The measure must represent the desired outcome, or requirement. The customer dictates the requirement; therefore, the measure must reflect validated customer requirements. The requirements must be specifically defined, not general or merely related to the real concern. If the customer is concerned about staffing levels, are they really talking about staff retention rates, time to fill vacant positions, or the actual number of vacant positions? These are three different, but related, issues. You may hit a home run with the wrong one.
Type of Measure
There are three basic types of measures: Descriptive, Diagnostic, and Predictive.

Descriptive measures include those that measure the “number of” something and “averages.” This type doesn’t allow for useful comparisons, poorly represents valid customer requirements, and only shows what has already happened.

Diagnostic measures include “rates” and the “Percentage Achieving the Requirement.” A Diagnostic measure enables effective prioritization and analysis. A Descriptive measure could show that Clinic X has the most complaints, but a Diagnostic measure might indicate that Clinic Y has the highest complaint rate. Similarly, a Descriptive measure might show that Department A is achieving the targeted average customer wait time of 9 minutes, while a Diagnostic measure shows that only 48% of Department A’s customers are waiting 9 minutes or less. Diagnostic measures point to the problem, which leads to root cause identification. The right measure drives the right behaviors, analytical approaches, and techniques.

Finally, Predictive measures come in sets: outcomes and their drivers. The outcome measure can be Diagnostic, such as “The Percentage of Calls Answered Within 20 Seconds.” The drivers of the outcome should also be Diagnostic and have a statistical relationship with the outcome. Examples might include “The Percentage of Calls With Duration of 4 Minutes or Less” and “The Absentee Rate of Customer Service Representative Staff.” These drivers affect the speed of answer, and by controlling them, we can predict the outcome. Predictive measures tell us what will happen and, as such, are also Prescriptive, because they tell us what we need to measure.
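To make the Descriptive-versus-Diagnostic distinction concrete, here is a minimal sketch using made-up wait-time data (the numbers are illustrative, not from any real department). The average can sit exactly on the 9-minute target while only half the customers actually meet the requirement:

```python
# Hypothetical wait times, in minutes, for twelve of Department A's customers.
wait_times = [2, 4, 6, 7, 8, 9, 10, 11, 12, 12, 13, 14]

requirement = 9  # customer requirement: served in 9 minutes or less

# Descriptive measure: the average wait time.
average_wait = sum(wait_times) / len(wait_times)

# Diagnostic measure: the percentage of customers meeting the requirement.
meeting = sum(1 for t in wait_times if t <= requirement)
pct_meeting = 100 * meeting / len(wait_times)

print(f"Average wait: {average_wait:.1f} minutes")            # 9.0 -- on target
print(f"Within {requirement} minutes: {pct_meeting:.0f}%")    # 50% -- far from it
```

The Descriptive measure says Department A is fine; the Diagnostic measure exposes the problem and points toward the customers who are not being served on time.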
A valid target establishes the credibility of the performance gap, the necessary level of effort, and the funding necessary to achieve the desired performance level. If the customer wants timely service, how timely is timely? How many minutes, hours, days, or fractions thereof? Organizational measures such as KPIs or Strategic Objective measures can be cascaded to lower levels. The targets should also be cascaded. After all, leaders need to know who will contribute to closing the gap, and how, when, and from where those contributions will come. An inappropriate target can generate a lot of wasted effort and resources. Following are some typical sources of targets:
- Validated customer requirements which are “Critical to Quality”.
- Comparative role model organization performance.
- Industry Standard or Certification Requirement.
- Previous best performance.
- Industry average/quartile.
- SWAG (Scientific Wild _ Guess).
- Totally arbitrary.
You’ve probably seen many of these target approaches in action and realized that some are better than others. They all share one common characteristic, however: they define the gap between current and desired performance and set the stage for gap-closing activities. Therefore, the selected target must be a valid target to help ensure the most efficient use of resources when improving performance.
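Cascading a target can be sketched with a simple allocation rule. In this hypothetical example (all figures assumed for illustration), an organization-level gap-closure target is divided among clinics in proportion to each clinic’s share of current failures, so leaders can see who is expected to contribute what:

```python
# Hypothetical: the organization must eliminate 94,000 failed transactions
# per year to reach its top-level target.
org_gap = 94_000

# Assumed current annual failure counts per clinic.
current_failures = {
    "Clinic X": 40_000,
    "Clinic Y": 35_000,
    "Clinic Z": 25_000,
}

# Cascade the gap-closure target in proportion to each clinic's share.
total = sum(current_failures.values())
cascaded_targets = {
    clinic: round(org_gap * failures / total)
    for clinic, failures in current_failures.items()
}

for clinic, target in cascaded_targets.items():
    print(f"{clinic}: reduce failures by {target:,}/year")
```

Proportional allocation is only one possible rule; a real cascade might weight by improvement potential or strategic priority instead. The point is that the lower-level targets sum back to the organizational gap.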
Quantified Value of the Difference – The Gap
The difference between the actual and targeted performance level is called the gap. All gaps have a financial penalty known as the Cost of Poor Quality (COPQ). Note: This was the subject of a previous post, so I’ll abbreviate this discussion. These costs represent the consequence of the gap and can easily exceed millions of dollars annually. They can be categorized as the costs of failures, costs attributed to detection, and costs associated with prevention. Some of these costs are easily seen and quantified; others trigger costs in other areas that may not be easily identified; and still others may be intangible and not quantifiable in financial terms. All, however, are important and should be listed for leaders’ consideration. By quantifying the gap in terms of both performance and financial impact, leaders have a better perspective of business requirements and a basis for more effective prioritization of resources and determination of Return on Investment.
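Quantifying a gap in both performance and financial terms is simple arithmetic once the pieces are in hand. The sketch below uses entirely assumed figures (target, current performance, volume, and per-failure cost are all hypothetical) to show the calculation:

```python
# Assumed figures -- none come from real data.
target_pct = 95.0          # target: 95% of customers meet the requirement
actual_pct = 48.0          # current performance
annual_customers = 200_000 # annual transaction volume
cost_per_failure = 25.0    # assumed average COPQ per failure (rework,
                           # complaint handling, lost business, etc.)

# Performance gap, in percentage points and in failures per year.
gap_pct = target_pct - actual_pct
annual_failures_in_gap = annual_customers * gap_pct / 100

# Financial size of the gap: the annual COPQ that closing it would avoid.
annual_copq_of_gap = annual_failures_in_gap * cost_per_failure

print(f"Gap: {gap_pct:.0f} percentage points "
      f"({annual_failures_in_gap:,.0f} failures/year)")
print(f"Estimated annual COPQ of the gap: ${annual_copq_of_gap:,.0f}")
```

Even with a modest assumed cost per failure, a 47-point gap across 200,000 annual transactions amounts to millions of dollars a year, which is exactly the kind of figure that gets leadership’s attention. Intangible costs would be listed alongside, even if they cannot be added to the total.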
It’s often been said, “What gets measured gets done.” We might add to that axiom that it’s important to have the right measure and the right target, and to quantify the gap financially.