Be fair to your agents: change the way you assess their performance

"Humans are, and will remain, at the core of customer service"

Accenture revealed that no less than 64% of consumers would switch to another provider after a bad customer service experience.

Despite recent self-service improvements through AI, customers still want to interact with real people. According to ASPECT, “86% of consumers feel that they should always have the option to transfer to a live agent.” Today, customer service is often the only point of contact customers have with your brand.

Companies can’t afford to lose customers through poor customer service

Employee experience and engagement matter even more

Many creative companies manage to build, maintain and foster a great dynamic in the workplace. But, sometimes, it’s not enough. 

Customer service people are often under-appreciated. They solve problems and answer questions all day, performing repetitive tasks that have value beyond their direct line of sight. Everything they do is monitored and analyzed - it’s a high-pressure role and it's no surprise that turnover is usually very high. Aside from how detrimental this is to company culture, we all know how expensive hiring and onboarding can be.

As employee engagement becomes an increasingly important element of successful customer service department strategies, holding regular meetings with agents to discuss their development opportunities and review their performance is essential.

When it comes to managing individual performance, two main dimensions have to be taken into consideration: 

  1. KPIs: here, you mostly rely on your quantitative and qualitative metrics or KPIs. 

  2. Quality Assurance results: centered around the results from auditing interactions between agents and customers to assess communication style, tone, politeness, adherence to processes, etc. 

Quality Assurance is just as important as monitoring metrics and KPIs, because it pinpoints room for improvement. Nevertheless, despite the availability of intuitive tools for carrying out Quality Assurance, it's not yet central to every company’s customer support strategy (although it should be). Therefore, in this post, we will focus on improving performance evaluation on the part everyone relates to: metrics.

The current way of tracking performance is not fair to agents

Agents are emotionally intelligent people and they are great at resolving customers’ queries. However, when they are pressured to meet demanding quantitative objectives, the quality of their work will always suffer in the long run and employees will be disheartened. Poor employee engagement means turnover, poor customer service, frustrated customers - and churn.

Below are the most frequent metrics used to assess individual performance.

[Image: overview of common agent performance metrics]

A performance system based solely on quantitative metrics is incomplete, so it’s important to have a mix of both quantitative and qualitative metrics. Yet, even when you blend these metrics together, they still won’t give you the full picture.

In most companies, these metrics are calculated for each person in the team and then compared to either the quarterly/annual objectives, OKRs (Objective - Key Result) or the team average. If this is how it’s currently done at your company, perhaps you’re willing to go a step further.

How can I make the process fairer?

A common mistake made during 1-1s with support reps is looking at averages of individual metrics and comparing them from week to week. As the weeks go by, the metrics of an individual can fluctuate significantly because you might not have enough data (especially from survey results) to draw meaningful, statistically reliable conclusions. The type of tickets they work on, or the distribution of the work across different channels, may lead to completely different results. It is then extremely confusing for agents to see their performance zig-zag like this. It may even result in a distrust of the metrics themselves.

As a first step, managers should use moving averages to assess the performance of an individual. Instead of checking how one metric has changed from the previous week, they could instead look at the average values from the past 4 weeks.
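As a sketch of this first step, the moving average is straightforward to compute from a list of weekly values. The weekly figures below are purely illustrative:

```python
# Sketch: smooth weekly metrics with a 4-week moving average.
# The weekly_fcr values below are illustrative, not real data.

def moving_average(values, window=4):
    """Average of the most recent `window` values (fewer if not enough data)."""
    recent = values[-window:]
    return sum(recent) / len(recent)

# Hypothetical weekly First Contact Resolution rates for one agent
weekly_fcr = [0.82, 0.70, 0.91, 0.74, 0.79]

# Instead of comparing 0.79 against last week's 0.74, report the
# average of the last 4 weeks, which smooths out the zig-zag.
print(round(moving_average(weekly_fcr), 3))  # → 0.785
```

The week-to-week view would show a swing from 74% to 79%, while the 4-week average moves far more gradually.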

The second step is to take into account the type (or nature) and channel of the requests that are handled by the people in their team. Not doing so is actually one of the biggest mistakes in a customer service department strategy.

Let’s take a look at a couple of examples:

1️⃣ It wouldn’t be fair to compare two agents - one that handled exclusively ‘missing item’ complaints, and another that handled ‘profile update’ requests - by the same metrics. Doing so would always suggest that the person handling (the easier, faster) profile update requests is outperforming the other. Unfortunately, they are measured by the same yardstick; in reality, the first agent might have performed exceedingly well for the requests they dealt with.

2️⃣ Let’s look at productivity (interactions solved per day). If an agent spends 50% of their time on chat and a second agent spends only 20% of theirs, the first one’s productivity would be higher because they are handling 2, 3 or 4 chats at the same time (for a greater percentage of their working hours). In this case, average productivity would not be a valid metric to draw meaningful comparisons.

It's not so much about what average you use to compare performance but, rather, how you do it.

For large teams, where there is enough data to get reliable results, there's a simple solution: weighted averages.

Let’s illustrate this with another example.

Paul is a service rep with a First Contact Resolution rate of 78%. The team goal was 90% and the team achieved 88%. Paul is below both the goal and the team’s average. Does that make him a low performer? At first sight yes, but he's not. Here's why...

Let's assume that Paul worked tickets in these proportions:

  • 50% related to ‘Product damaged on arrival’

  • 30% ‘Change of address’ requests

  • 20% ‘Where is my order’ requests

‘Product damaged on arrival’ tickets usually require more interactions in this fictional company, so it’s normal that Paul's average is negatively impacted: half of his tickets were on this subject.

Of course, Betsy (Paul's direct manager) may be aware of this, but it doesn’t help her see whether Paul has handled these requests well. An advanced reporting system would show Betsy the average First Contact Resolution rate for ‘Product damaged on arrival’ tickets and, hence, tell her whether Paul did well on those. She would then do the same comparison for tickets of type ‘Change of address’, and so on.

But, would she really have time to do that for each person in her team? Ideally, yes, but that’s quite unlikely.

To save time and effort, Betsy should be able to access a benchmark metric that allows her to assess whether Paul is performing well or if he needs some support.

[Image: illustration of the agent benchmark metric]

Here's how it works...

Let's say that the First Contact Resolution rate per topic is as follows:

  • ‘Product damaged on arrival’ - 60% 

  • ‘Change of address’ - 90%

  • ‘Where is my order’ - 82%

Since Paul worked 50% tickets of type 'Product damaged on arrival', 30% of type 'Change of address' and 20% of type 'Where is my order', his benchmark would be computed as follows:

(60%✖️50%) ➕ (90%✖️30%) ➕ (82%✖️20%) = 73.4%

With a First Contact Resolution rate of 78%, Paul is above the benchmark: he did better than an average agent who, just like him, would have worked exactly the same types of tickets in exactly the same proportions. Betsy can now (fairly) conclude that he did a better job than the average. Paul would be happy and feel fairly treated.
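The benchmark above is simply a weighted average of the team-level rates, weighted by the agent's own ticket mix. A minimal sketch, using the illustrative figures from this example:

```python
# Sketch: a per-agent benchmark computed as a weighted average of
# team-level metric values per ticket type, weighted by the agent's
# own ticket mix. All figures are the illustrative ones from the text.

def benchmark(team_rates, agent_mix):
    """Weighted average of team rates, weighted by the agent's ticket mix.

    agent_mix shares are assumed to sum to 1.
    """
    return sum(team_rates[topic] * share for topic, share in agent_mix.items())

team_fcr = {
    "Product damaged on arrival": 0.60,
    "Change of address": 0.90,
    "Where is my order": 0.82,
}
paul_mix = {
    "Product damaged on arrival": 0.50,
    "Change of address": 0.30,
    "Where is my order": 0.20,
}

paul_benchmark = benchmark(team_fcr, paul_mix)
print(round(paul_benchmark, 3))  # → 0.734; Paul's 78% sits above this
```

The same function works for any agent on the team: only the mix changes, not the team-level rates.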

The ‘benchmark’ logic is simple but - without help from a Business Analyst, Data Scientist, or BI team - Betsy may simply be unable to draw such conclusions. Paul, on the other hand, might still be blamed for pulling down the team’s results.

Whilst we used First Contact Resolution rate as an example, the same principle and reasoning can be applied to any other metric, whether quality- or productivity-related (customer satisfaction, average handling time, first reply time).

When working with such a benchmark, does it still make sense to set individual goals?

Absolutely, but it should be done in a slightly different way. The benchmark value may fluctuate over time, and this is to be expected: not only are agents working different distributions of tickets over time, but they are also subject to other factors that have an impact on most metrics (e.g. peak periods or a lack of resources in the team). 

We thus recommend setting individual targets relative to the benchmark, i.e. around or above it. Example: “Stay within 2% of your benchmark value for your customer satisfaction results and 10% above your productivity benchmark.”

At a team or department level, maintaining hard-line targets still makes sense: fairness is no longer a concern and you may also want your team - as a whole - to be galvanised by an ambitious (but attainable) target.

If you want to move away from unfair individual performance assessments and simple averages, you should consider introducing a benchmark metric. If time, skills or resources are an issue, contact us - we’d be happy to help. In a matter of days, we can help your entire organisation benefit from a fairer way of assessing individual performance.