We are all familiar with benchmarking as it pertains to business performance. The other day I decided to look up the history of the term. According to Overclock.net, an online forum on maximizing computer performance:
“A benchmark is a point of reference for a measurement. … Benchmarks were often used in situations where a surveyor would use a known height or width (of his bench) and then use angles and some common geometric algebra to determine the scale and size of other objects.”
From that humble and functional beginning, benchmarking has become a standard component of the modern-day business lexicon. I would argue that there are three types of benchmarking – two of which contradict the original reasons for the term, and one that remains true to the term’s original purpose.
The first type is called competitive benchmarking. Here’s how this one works: Somewhere a list comes out that measures some kind of operational performance – let’s say average cost per case-mix-adjusted discharge. Most often, such a list shows up in the local paper and is prepared by folks who are not necessarily healthcare professionals. The result is a conclusion by the average reader that one hospital (the one with the lowest cost) is the best one, and a competition is set off among those on the list to “improve their standing.” It is easy to see that there is an awful lot wrong with basing your assessment of any healthcare organization on a single measurement.
The second type of benchmarking has two names: punitive benchmarking and “or else” benchmarking. This type of measurement usually takes place after a senior leader has been embarrassed at an offsite business meeting or after the organization has experienced a period of poor financial performance. Perhaps the COO has just been to a GPO meeting, and at one session a slide shows up that ranks the organization 25th out of 26 hospitals in contract compliance. No matter what legitimate reasons you may have for your standing, you can rest assured that there is only one other person who will be taking worse heat than you – the person whose organization finished 26th. And remember, the reason for the “or else” may or may not be well-founded. It could be that you need to improve performance against key indicators to avoid bankruptcy. It could just as easily be that you are directed to improve simply to keep the boss from being embarrassed at the next meeting. Since punitive benchmarking is always accompanied by a consequence (your raise, bonus or even your job), it is never a happy experience.
The third and final type of benchmarking – the one that remains true to the original intent of the process – is called operational benchmarking. It is pretty simple: You pick something you want to measure, such as laundry and linen costs. Then you identify a measurable relationship, such as laundry and linen cost per case-mix-adjusted discharge. Once you identify your benchmark and aggregate all of the cost components associated with the measurement, you can generate a starting number. Then, as time goes by, you can measure the impact of any changes you introduce to the process. This remains true to the original concept of the term because you have picked what you will measure, you have defined the terms of the measurement and the results are valid relative to your operation.
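The arithmetic behind that example is straightforward, and a short sketch may make it concrete. Everything below is hypothetical: the cost components, dollar figures, discharge counts and case mix index are invented for illustration, not drawn from any real hospital.

```python
# Hypothetical sketch: an operational benchmark of laundry and linen
# cost per case-mix-adjusted discharge (CMAD). All figures are invented.

def cost_per_cmad(cost_components, discharges, case_mix_index):
    """Aggregate all cost components tied to the measurement, then
    divide by case-mix-adjusted discharges (discharges x case mix index)."""
    total_cost = sum(cost_components.values())
    cmad = discharges * case_mix_index
    return total_cost / cmad

# Baseline period: generate the starting number.
q1 = cost_per_cmad(
    {"outsourced_laundry": 180_000, "linen_replacement": 45_000,
     "internal_labor": 60_000},
    discharges=5_000, case_mix_index=1.25)

# A later period, measured the same way, after a process change.
q2 = cost_per_cmad(
    {"outsourced_laundry": 165_000, "linen_replacement": 40_000,
     "internal_labor": 60_000},
    discharges=5_100, case_mix_index=1.25)

print(f"Q1: ${q1:.2f} per CMAD  ->  Q2: ${q2:.2f} per CMAD")
```

Because you defined the benchmark and its terms yourself, the movement from the first number to the second is valid relative to your own operation, which is the whole point of operational benchmarking.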
There are two important components of benchmarking. The first is the macro component – the benchmark itself. It is always a global measurement. The second is the micro component(s) – the interventions you introduce to affect the benchmark.
The healthcare supply chain has been wanting for utilitarian benchmarks for years. Leaders have avoided collaborating to identify and use operational benchmarks, largely out of fear. The most familiar whine ever heard is, “But we’re different!” Frankly, that hue and cry has kept us from making progress toward controlling costs. But don’t blame the supply chain leaders alone. They are often the victims of the two unproductive strains of benchmarking, and they are understandably wary of creating measurements that might be held against them.
Who can blame them for survival behavior?
We at Optimé Supply Chain understand the value of collaborative operational benchmarking and support its practice. To that end, we are looking for organizations that want to collaborate to develop and share key operational benchmarks. If you are interested in participating in the development of a benchmarking tool, contact me at email@example.com.
Next month: Ten key benchmarks for discussion.