The Feedback Loop: Agent Quality Assurance
We believe there are two Quality Assurance (QA) processes you can run. This article covers the Feedback Loop: the process of regularly checking, or auditing, support tickets to make sure your support team is running as it should.
To identify and solve specific problems, you will want to read about The Comprehension Loop!
Assessing the Quality of Agents’ Work
This is the part of QA that you are probably most familiar with: randomly checking your support agents' tickets to assess a number of different things. It could be to see if they are using the right tone and correct grammar - or whether they are doing everything they can to give the customer an effective answer to their query, the first time round.
But it's not a witch hunt!
QA shouldn't be viewed by agents as a way of pointing out their flaws - and managers should avoid framing it that way too. QA is simply a way of ensuring that the overall quality of interactions with customers is high, and because tickets are picked at random, it is inherently impersonal.
An efficient process should bring people together, unify them and make the team stronger. It will point out common flaws, show you where knowledge gaps appear and where documentation is lacking. When reviews are being discussed with agents, it should be from the perspective of coaching them and trying to develop their skills, not to micro-manage them.
Improving feedback and coaching by deep diving into comments left by reviewers
We believe that this part of Quality Assurance is more about the feedback and coaching that you give agents than the grading of the tickets themselves.
It's really important that you take the opportunity to tell them how they can improve and not just where.
💡 Examples of helpful & unhelpful comments left by reviewers:
✅ Make sure you link the customer to this knowledge base article when dealing with this 'Shipping' issue.
❌ If you don't give the customer all the information, you won't stop them from contacting us again.
The majority of our customers run two review meetings per month for each agent: one longer meeting and one shorter check-in.
- 1 hour: This is the time to go through the agent's reviews, speak about their KPIs and see how they are trending for that month. It's also the time to give them as much constructive feedback as possible.
- 15 mins: This shorter session is just a check in on how they are doing and to get any updates on any other projects they may be working on within the company. They can raise any short questions with you here.
Customer Service Team & Organization Performance
All the reviews that you're conducting help you build a picture of how your team as a whole is performing.
Providing effective feedback to agents and giving them the chance to develop their skills is important on an individual level, but looking holistically at all the reviews you've conducted is also very important.
Using the example above, you'll be able to see that this agent - let's call her Alice - has no problem with Tone of voice. As her reviewer, we also know that our team has no problem with it, and we could assume that Tone of voice across the company is not a problem either. No harm done here, but it's never good to assume, as we see in the next example!
When we consider Politeness, things change. After reviewing our whole team, we could (wrongly) assume that the whole organization is bad on this front, when really it's just our team's issue. Alice has got it right, like the rest of the organization - so we should set our team the goal of tackling this issue, fast! In our 1-1s, we should ask agents what they are struggling with in terms of politeness.
Most importantly, without a proper assessment of Product knowledge at the agent, team and company levels, we could come to the wrong conclusions. We might assume that Alice is at fault because she is not performing like the rest of our team. This *could* be true, but after the analysis we see that the whole company has a problem with Product knowledge. It's now down to us, the team that's performing above the rest, to share our knowledge and ways of tackling issues with the rest of the company. If we didn't do this, the rest of the company would improve more slowly - and Alice might still feel bad about having below-average Product knowledge when she shouldn't.
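The agent/team/organization comparison described above boils down to averaging review scores at three levels of aggregation. A minimal sketch in Python - the review records, team names, and category labels here are hypothetical, and real QA tools will have their own data model:

```python
from statistics import mean

# Hypothetical review records: (agent, team, category, score out of 5)
reviews = [
    ("Alice", "Team A", "Tone of voice",      5),
    ("Alice", "Team A", "Politeness",         5),
    ("Alice", "Team A", "Product knowledge",  2),
    ("Bob",   "Team A", "Tone of voice",      5),
    ("Bob",   "Team A", "Politeness",         2),
    ("Bob",   "Team A", "Product knowledge",  4),
    ("Carol", "Team B", "Tone of voice",      5),
    ("Carol", "Team B", "Politeness",         5),
    ("Carol", "Team B", "Product knowledge",  2),
]

def avg(rows, category, **filters):
    """Average score for one category, optionally filtered by agent or team."""
    scores = [s for a, t, c, s in rows
              if c == category
              and filters.get("agent", a) == a
              and filters.get("team", t) == t]
    return mean(scores)

# Compare Alice against her team and the whole organization on one category:
category = "Product knowledge"
print(avg(reviews, category, agent="Alice"))  # agent level
print(avg(reviews, category, team="Team A"))  # team level
print(avg(reviews, category))                 # organization level
```

Looking at the same category at all three levels is what prevents the wrong conclusion: a low agent score only means something once you know whether the team and company averages are low too.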
Let’s Take An Example
💡 Example: Let's take this Product knowledge example from the perspective of a QA Manager, or Team Lead.
Alice isn't showing the same level of product knowledge as the rest of the team for requests related to a new line of products. I need to find out whether this is because of poor adoption, or if there is a problem with the documentation itself (unclear, confusing information).
We carry out the reviews to see why Alice is struggling, but also to see if she is the only one. I review tickets related to these new products in my team and samples from other teams.
It seems that Alice isn't the only one. The First Time Resolution Rate for the rest of the organization is lower than it is for the agents in my team.
We check the documentation with the responsible parties, explaining that some of it is confusing or contains wrong information, and we update all agents to improve their knowledge. We continue to check these tickets.
If numbers improve, and we are satisfied that we have solved the issue, we can consider the problem closed. If not, we repeat the process and keep looking.
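The First Time Resolution comparison in this example can be computed directly from ticket data. A minimal sketch, assuming each ticket records whether the customer had to contact us again - the `reopened` field and the sample data are hypothetical:

```python
def first_time_resolution_rate(tickets):
    """Share of tickets resolved without the customer contacting us again."""
    resolved_first_time = sum(1 for t in tickets if not t["reopened"])
    return resolved_first_time / len(tickets)

# Hypothetical samples reviewed for the new product line:
team_tickets = [{"reopened": False}, {"reopened": False},
                {"reopened": True},  {"reopened": False}]
org_tickets  = [{"reopened": False}, {"reopened": True},
                {"reopened": True},  {"reopened": False}]

# The team's rate vs. the rest of the organization:
print(first_time_resolution_rate(team_tickets))  # 0.75
print(first_time_resolution_rate(org_tickets))   # 0.5
```

Re-running this comparison after the documentation update is how you tell whether the numbers improved and the problem can be considered closed.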
What Tickets Should I Review?
Review at Random, across channels and types of issues!
Some support teams make the mistake of only looking at the 'bad' tickets, or the ones that return the lowest Customer Satisfaction (CSAT) scores. This is wrong for two reasons:
- You don't actually know that the customer is rating the support they received. They could be rating the service as poor because they had a missing delivery, for example, which unfairly reflects on the support agent.
- It defeats the purpose of the Feedback Loop! It's supposed to give you an unbiased view of how your people are currently performing. Checking 'bad' tickets is a typical example of the Comprehension Loop, and it isn't aimed at coaching agents.
How Many Reviews Should I Conduct?
Some companies review up to 10 interactions per agent per week; for others it's 50 per quarter.
Usually, companies review 2-3 interactions a week for each agent. This means agents get feedback on their work every week, as soon as the reviews are done, but it also means there will be 12-15 reviews to discuss in the monthly 1-1. Nevertheless, the more, the better!
The recommendation is that you take a truly random sample, with some small exceptions. For example, you might want to filter out tickets that are internal notes and closed in less than 30 seconds; there's probably not too much to review there. What you're really trying to do is get a snapshot of that agent's general performance, not cherry-pick tickets (for better or worse) in a way that skews their score.
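The sampling approach above - truly random, minus a small exclusion - can be sketched in a few lines. A hypothetical illustration, assuming tickets are dicts with made-up fields like `is_internal_note` and `handle_time_seconds`:

```python
import random

def sample_tickets(tickets, n, seed=None):
    """Draw a random sample of tickets for review, filtering out
    internal notes that were closed in under 30 seconds."""
    eligible = [
        t for t in tickets
        if not (t["is_internal_note"] and t["handle_time_seconds"] < 30)
    ]
    rng = random.Random(seed)
    # Sample without replacement; cap at the number of eligible tickets.
    return rng.sample(eligible, min(n, len(eligible)))

# Example: pick 3 tickets to review for one agent this week.
tickets = [
    {"id": 1, "is_internal_note": True,  "handle_time_seconds": 10},
    {"id": 2, "is_internal_note": False, "handle_time_seconds": 300},
    {"id": 3, "is_internal_note": False, "handle_time_seconds": 120},
    {"id": 4, "is_internal_note": True,  "handle_time_seconds": 900},
    {"id": 5, "is_internal_note": False, "handle_time_seconds": 45},
]
picked = sample_tickets(tickets, 3, seed=42)
```

The key design choice is that the filter removes only obvious non-reviewable noise; everything else stays in the pool so the sample remains an unbiased snapshot.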
Looking for all of our Quality Assurance tips? Get the full Customer Service QA guide!