Accountability Without Effectiveness
We need to transform what we measure in government
Metrics, metrics, metrics. In government we track A LOT of information, largely quantitative, about our activities and desired outcomes. In theory, this should help government actors track progress against work plans, action plans, project plans, strategies, priority projects, and so on, and communicate that progress to their managers, executives, cabinet, and the public.
Metrics are instrumental for this tracking. There are, however, a number of traps we get caught in when using metrics that unintentionally generate friction in our efforts to create value, innovate, or change the status quo. Collecting data and measuring progress should help us learn. That is the prime directive, IMHO. Working in complexity and trying to create new forms of value for the public requires a degree of experimentation and tight feedback loops. This feedback helps us know whether we are doing the right things, eliminate what doesn't work, and measure the value created. However, when we use data and metrics only for accounting purposes, we fall into traps.
Ten Metric Traps
- Disproportionately using metrics for accountability over other values such as learning, advocating, and answering. As I stated above: “learning is the prime directive, from which the rest should follow”;
- Prescribing metrics in an a priori fashion that constrains discovery — sometimes you don’t know what the solution will look like or what the outcomes could be until you’ve tried something;
- Measuring it simply because we can — just because we can measure it doesn’t mean we should. Just because we don’t know how to measure something, doesn’t mean it’s impossible to measure;
- Privileging quantitative over qualitative methods due to the erroneous belief that they produce metrics that are objective and bias-free. While quantitative data is great for telling us what is happening, qualitative data is needed to help us understand why something is happening, as well as its human impact;
- Micro-managing through metrics — “your performance will be measured by the number of widgets you produce, how you produce them is totally up to you” isn’t really giving autonomy or creative freedom to teams;
- Choosing metrics based on what we know how to measure (instrument bias) rather than measuring what will provide the most value;
- Putting too little thought into measures and not involving others in the discussion — in the rush to meet deadlines we haphazardly decide on key performance indicators from our cubicle;
- Measuring outputs and not outcomes — this is related to measuring what we know how to measure. As a result we end up counting beans, determining success based on high outputs without tracking quality or effectiveness;
- Focusing on short term measures over long term impact — this is related to measuring what is easiest rather than what is most valuable. Building follow-up evaluation into projects/programs is critically important, especially when dealing with complex problems;
- Measuring process efficiency and not experience or effectiveness — if we only measure the number of transactions and transaction time, then improvement efforts will inevitably be based on optimizing processes. Measuring experience and effectiveness, while more challenging, will help us prioritize other forms of improvement.
We collect, analyze, and share data for four different reasons: to account, communicate, advocate, and learn. Many organizations (be they social, public, or private) do not adequately or meaningfully use data to drive value creation. For nearly a decade I have been advocating for, advising, and training organizations to be more data-driven in their work. When I teach this work, accountability is only a quarter of the value that data collection provides. If I were to emphasize only one of these it would be learning. Learning is fundamental to driving continuous improvement and performance excellence. Yet what I have observed is that, in many organizations, learning is not a significant part of how metrics are used.
Data paves the road to the bottom. It is the lazy way to figure out what to do next. It’s obsessed with the short-term.
Data gets us the Kardashians. — Seth Godin
We’re operating in an institution where we do all the activities we said we were going to do and hit all the targets we said we would hit, and yet the effectiveness of our work is apparently not changing: confidence and trust in government is low, employee satisfaction is low, and the needs of the public are not being met. Could this be partly because we are too focused on transactional and output metrics? Those metrics are only part of the answer. We measure the number of transactions, the number of cases opened and closed, processing time, etc., but we are missing something.
What happens when we are hitting all the wrong targets but only measuring the number of targets we hit? The result: government that is highly accountable, but not very effective. Every solution for improvement then becomes about making things faster and cheaper rather than more effective or enjoyable. How might we start measuring what makes up the relationship the public has with government?