4 Methods for Implementing Quality Assurance Calibration in Your Customer Service Team
What is QA calibration, and what are the top tips?
Customer service quality assurance calibration is a process that ensures consistent service standards are being met. It is critical because it helps CS leaders identify gaps in their customer service processes and training, and close them.
Implementing calibration when you've never done it before may seem daunting, so let me start with some quick tips:
Get stuck in! There's nothing to lose by implementing this system. By designing a calibration process, you will find other parts of your operation that need improving. What really matters, however, is how you introduce it.
Choose the pace that suits you best. There's no need to go from 0 to 100 straight away. Laying the foundations correctly is far more important than getting it to full efficiency as soon as possible.
Choose the method that best suits your current processes and your long-term customer service goals.
How many tickets should you calibrate each week/month?
Well, it depends on a lot of factors. I recommend you first take into consideration the following:
Customer Service Team size
Number of CS teams
Experience level of your reviewers / agents
Number of conducted reviews
Industry: if you work in a regulated industry, you may need to follow specific processes, and therefore, wish to do reviews more frequently to ensure adherence
The time you want to spend on your control loop (QA)
The first indicator of how many calibration reviews to conduct is how many regular reviews you currently carry out. On average, our customers calibrate 10% of their total reviews. (For example, a team conducting 200 reviews a month would calibrate around 20 of them.) I would say that should be your benchmark for an efficient process, but if you can afford to do more, do more!
Through your calibrated score, you'll quickly be able to see who is aligned and who is less so. Naturally, the larger the gap, the less aligned you are as a team.
Look out for a bigger gap when you have new reviewers who aren't yet familiar with your best practices. In the early days, you may want to run more calibrations for these newbies until you feel they are rating tickets in the same way as everyone else.
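One simple way to track this alignment is to compare each reviewer's scores against the agreed calibrated scores. The sketch below is a minimal, hypothetical example: the ticket IDs, reviewer names, scores, and the 1-point flagging threshold are all illustrative assumptions, and the mean absolute gap is just one common way to measure alignment.

```python
# Hypothetical example: measuring reviewer alignment against calibrated scores.
# All data and the 1.0 threshold below are illustrative, not prescriptive.

calibrated = {"T1": 8, "T2": 6, "T3": 9}  # agreed calibration score per ticket

reviewer_scores = {
    "Alice": {"T1": 8, "T2": 7, "T3": 9},
    "Bob":   {"T1": 5, "T2": 4, "T3": 7},  # a newer reviewer, less aligned
}

def average_gap(scores: dict) -> float:
    """Mean absolute difference between one reviewer's scores and the calibrated scores."""
    return sum(abs(scores[t] - calibrated[t]) for t in calibrated) / len(calibrated)

for name, scores in reviewer_scores.items():
    gap = average_gap(scores)
    flag = "schedule extra calibration" if gap > 1.0 else "well aligned"
    print(f"{name}: average gap {gap:.2f} -> {flag}")
```

Run over a few calibration rounds, a trend like this makes it easy to see whether a new reviewer's gap is shrinking as they absorb your best practices.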
What different methods can you use?
1. Review, grade, and collectively discuss
The same tickets are reviewed by multiple people & results are examined together.
Each member of the QA team receives & grades the interactions before the calibration session. Their scores are submitted independently.
Someone compiles the scores and highlights the differences (or automate ticket reviews with a QA solution).
The group discusses the differences, and the session should end with everyone agreeing on a calibrated score.
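The compile-and-highlight step can be as simple as finding the tickets with the widest score spread so the session starts where the disagreement is. The sketch below is a hypothetical illustration: the ticket IDs, reviewer scores, and the 2-point discussion threshold are assumptions, and score range is just one easy measure of spread.

```python
# Hypothetical example: surfacing the tickets with the widest score spread
# before a group calibration session. Data and the 2-point cutoff are illustrative.

scores_by_ticket = {
    "T1": {"Alice": 9, "Bob": 8, "Cara": 9},  # near agreement
    "T2": {"Alice": 7, "Bob": 3, "Cara": 6},  # big disagreement
    "T3": {"Alice": 5, "Bob": 5, "Cara": 8},
}

def spread(scores: dict) -> int:
    """Range between the highest and lowest score a ticket received."""
    return max(scores.values()) - min(scores.values())

# Order the agenda so the largest differences are discussed first
agenda = sorted(scores_by_ticket, key=lambda t: spread(scores_by_ticket[t]), reverse=True)

for ticket in agenda:
    if spread(scores_by_ticket[ticket]) >= 2:
        print(f"Discuss {ticket}: spread of {spread(scores_by_ticket[ticket])} points")
```

This keeps the session focused on genuine differences rather than re-stating the tickets everyone already agrees on.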
What are the benefits of this approach?
First of all, instead of spending time summarizing what everybody already knows, your sessions will be focused on differences.
Everyone plays a part in solving a problem or making a decision by having their say: no one loses out.
It will help you avoid “groupthink,” get the most honest response from every member, and with the help of a QA solution, streamline the entire process to save you a lot of time.
2. Review, grade, and discuss 1:1
The same tickets are reviewed by multiple people, and the results are calibrated by a master reviewer. As in method #1, everyone begins by submitting their scores independently.
Run an analysis on scoring gaps that appear (a QA solution calculates the calibration scores).
The master reviewer - or certified coach - highlights where the biggest differences are.
They hold 1:1 sessions with each reviewer to coach the areas that need the most improvement on a case-by-case basis. Meetings focus on the most critical areas, and next steps are planned.
What are the advantages?
This is the most efficient method for each reviewer.
You don't involve people who don't need to be coached, which also means that you can spend more time with those who need it the most.
The meetings become highly focused: feedback is explicitly relevant to each reviewer, which is very valuable.
3. Review, discuss 1:1, and grade
The master reviewer takes a sample of interactions from each reviewer and shares feedback with each person individually.
Each reviewer goes about their usual work of conducting reviews.
Then, the master reviewer takes a random sample of each reviewer's interactions to calibrate. (Note: the sample must include enough interactions from each person to draw meaningful conclusions.)
The master reviewer then scores these reviews and holds 1:1s with each reviewer.
Why is this method interesting?
This is where you can really dive into the comments left by your reviewers. Scoring gaps are important too, but an in-depth review of comments will help you understand where you need to focus your coaching sessions.
It doesn't pollute the system with duplicate reviews (i.e. two people don't review the same interaction, which also saves the reviewers time).
Excellent process for understanding and correcting the knowledge gaps of individual reviewers (but it's more time-consuming for the master reviewer than #2).
4. Review and grade together
All members of the QA team meet.
They review and grade the interactions together.
What is the value of this system?
Great when rolling out a new form or a new process: it gives everyone the opportunity to discuss and work through it step by step.
Popular amongst teams that don't have a lot of time, but know they need to calibrate.
Minimal preparation before sessions.
Watch out for the downsides:
Meetings are longer and can be quite time-consuming.
On a regular basis, it's difficult to get an accurate gauge of how well-calibrated each of the team members is.
The loudest or most senior voices will often prevail, and others won't be fully heard.
To wrap it up
Whichever method you go for, try to be efficient in meetings; the more of you there are, the more differing opinions there will be - and the fewer reviewed interactions you will get through in a session.
If you are a large organization, you can still test a calibration process out on one team. Implement, refine and adjust workflows through trial and error. When you're happy, expand it to include other teams. You'll avoid chaos this way.
Download the complete guide to Quality Assurance Calibration here.