Performance Management

by Peter Parkes

Measurement

Historically, measurement of performance was a manual, paper-based process, often carried out by inspection, perhaps followed by consolidation into a simple spreadsheet or database.

Ideally, measurement of business processes should be automatic and in-process where practical. In-process simply means that the measurement should fall out of the process and not introduce additional work. An example of this would be use of the call centre system itself to generate the numbers, such as the volume of calls, average waiting time, average call length and proportion of abandoned calls.
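
By way of illustration, here is a minimal Python sketch of in-process measurement of that kind. It assumes, purely hypothetically, that the call centre system can export each call as a simple record; the field names are invented for illustration rather than taken from any particular product.

from dataclasses import dataclass

@dataclass
class CallRecord:
    wait_seconds: int    # time queued before answer (or before hanging up)
    talk_seconds: int    # length of the call once answered; 0 if never answered
    abandoned: bool      # True if the caller gave up before being answered

def call_centre_metrics(calls: list[CallRecord]) -> dict:
    # Each indicator falls straight out of the raw call records,
    # with no extra work for the operating team.
    answered = [c for c in calls if not c.abandoned]
    return {
        "call_volume": len(calls),
        "average_wait_secs": sum(c.wait_seconds for c in calls) / len(calls),
        "average_call_secs": sum(c.talk_seconds for c in answered) / len(answered),
        "abandoned_rate": sum(c.abandoned for c in calls) / len(calls),
    }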

Automatic measurement is now much easier thanks to the computer systems that support most departments. With some basic knowledge of programming in spreadsheet applications, information can usually be pulled directly from these individual systems into a database or MS Excel. Alternatively, for the price of a few consultancy days, your IT department or a consultant could configure the extraction and transfer sufficient knowledge in-house for its upkeep.
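
As a sketch of that kind of consolidation, the fragment below assumes, again hypothetically, that each departmental system can already dump its figures as simple name-and-value pairs (the departments and indicators shown are invented), and merges them into a single CSV file that MS Excel or a database can import.

import csv

# Hypothetical weekly extracts from two departmental systems, already
# read into Python as (indicator, value) pairs; a real system would be
# queried through its own export or reporting facility.
hr_extract = [("staff_turnover_pct", 4.2), ("absence_days", 61)]
sales_extract = [("orders_taken", 312), ("average_order_value_gbp", 87.50)]

# Consolidate everything into one CSV file for MS Excel or a database.
with open("performance_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["department", "indicator", "value"])
    for department, extract in [("HR", hr_extract), ("Sales", sales_extract)]:
        for indicator, value in extract:
            writer.writerow([department, indicator, value])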

Case study

Daventry Council rated the tailoring of a simple management information system (MIS) for their customer services team as having equal value to the introduction of a computer-based Customer Relationship Management system that cost 100 times more; together, these systems helped them to achieve top ranking nationally for some of their services.

What to measure?

Measures should flow from the organisation’s strategy, but this is not always visible at lower levels of the organisation, and sometimes the link gets broken. Can you relate your measures back to the organisation’s overall objectives? If not, ask your management for clarification, as otherwise the measures are not likely to help either of you, even if they are somehow met.

Sometimes measures are applied to the wrong thing; in other words, you can measure something, but what you do doesn’t really affect the outcome. One way of checking for practicality is to apply a set of criteria known as SMART (Specific, Measurable, Achievable, Realistic, and Time-based).

Points to consider

The automated collection of data offers solutions to a number of managerial problems with measurement.

Quantity

How many indicators do we measure? Although we only need a handful of indicators to get a feel for performance, different departments, such as HR, Finance, Sales, Operations and Safety, will each need a different perspective.

Always remember that manual collection and collation of performance information places a burden on staff and administrators. Automated collection reduces this overhead cost and gives us greater flexibility.

Bias

Asking people how well they are doing is obviously a bit of a leading question. Extracting data directly from the process takes out much of the bias.

Frequency

When do we need the info? Of course, we need it when we need it. This is not necessarily at a specific time interval, such as monthly. It is the old IT dream of ‘the right information at the right time, at the touch of a button’. When using manual systems, it’s important not to overburden operational teams or administration with measurement at frequencies which offer no value. If we have an annual target, do we really need to measure every week?

Affecting what you measure

In any system, measurement can, and usually will, cause changes to the system. As we emphasise the bits that we measure, we also affect the bits that we don't.

People react to what is being measured; that is, they change their behaviour because they know about a specific measure. Decide whether measures should be visible or invisible to the operating team.

Audit trail

Unless they are system-based, many measures lack supporting data to qualify them. It is not uncommon in auditors' reports to see half the measures qualified as 'client data – unsupported'; in other words, someone was asked for a number and gave one.

Human behaviour and measurements

Advocates of measures often quote the saying 'What gets measured gets done'. The same saying is now used to illustrate the dysfunctional behaviour that poorly thought-out measures can cause, since the converse is that whatever is not measured is treated as unimportant and so is less likely to get done. People come to believe that only what is counted matters.

There is nothing so useless as doing efficiently that which should not be done at all.

Peter Drucker

Imagine trying to drive your car with only a fuel consumption gauge and being told that more is better. Poor measures can lead to introducing more and more measures – just to get people to do their job, not to improve performance. This is clearly counterproductive and overburdening, and leads to unnecessary stress among staff.

One way around this problem is to record metrics for management purposes only, and not to use them as a 'name and shame' tactic. Well-implemented measures can be gathered from within the process, without additional work by staff or even their awareness, and can be used to show supervisors and managers where they may need to inject more resources – extra staff, training or technology, for instance. Of course, we need to involve staff in the discussion of 'what and how' so that it is understood we are not being secretive.

Once we have the technology, there is a tendency to overuse it. Just because something is easy to measure, this doesn’t mean that it is useful. Conversely, just because something is difficult to measure, this doesn’t mean that we should measure something easy instead.

Measures gone wrong

The classic example of measures creating dysfunctional behaviour is in call centres. As call centre technology became established and commoditised, the call centre function started to be outsourced and competed for on price. With this move came the need to reduce costs – classic ground for introducing measures...

Have you ever phoned a call centre and been passed on, or worse, got through only for the line to go dead? This is the human response to staff being measured on the volume of calls taken. As soon as a call exceeds the benchmark of, say, five minutes, they pass you on. Alternatively, they hang up on subsequent calls to bring their average back down.
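
To see the arithmetic behind that behaviour, here is an illustrative Python calculation (the call lengths and the five-minute benchmark are invented numbers): one twelve-minute call drags the agent's average well over the benchmark, and just two near-instant hang-ups bring it back under.

# Illustrative arithmetic only: how one long call drags an agent's
# average over a five-minute benchmark, and how few near-instant
# hang-ups it takes to pull the average back under it.
benchmark = 5.0                    # minutes
calls = [4.0, 4.5, 12.0]           # one call badly over the benchmark
average = sum(calls) / len(calls)  # 6.83 minutes - the agent now looks 'bad'

dropped = 0
while average > benchmark:
    calls.append(0.1)              # hang up almost immediately
    dropped += 1
    average = sum(calls) / len(calls)

print(dropped, round(average, 2))  # 2 dropped calls restore the average (4.14)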

It is obvious that the best measure for a call centre is resolution of customers' problems. But that is somewhat less easy to measure, and would involve customers signing off jobs to say they are happy with the resolution. The alternative is to carry out customer satisfaction surveys after the event. This is scary, as you may find out that your satisfaction level is below 40 per cent with the best technology, whereas a Johnny-come-lately competitor is scoring over 90 per cent. Thinking in terms of output and input measures, an obvious input measure here is the extent (or lack) of customer services training.