The Comprehension Loop: Deep-Dive QA

This article is about the lesser-known, less frequently practised side of Quality Assurance. We call it the Comprehension Loop. It's about proactive checks and understanding why something is happening.
If you're looking for QA that monitors agent performance and confirms everything's running smoothly, you want The Feedback Loop!
Monitor Operations & Prevent Problems
The first part of the Comprehension Loop is what we would consider the 'standard' process by which you check your quality across the board. These QA checks are put in place to monitor activity related to new things (processes or other changes) even when there's no sign of anything going wrong - it's the pure control/safety loop.
The goal is to detect issues early (before they become visible through metrics) so that they can be addressed before they grow into bigger problems.
💡 TSIA Example:
Process.
Whether you are adding new channels or streamlining access to customer data, new processes are required, and these need to be clearly spelled out, training provided, and employees monitored for adherence.
A good example is improving the persistence of customer interactions.
Capturing every touchpoint with a customer into a consolidated customer history does no good if employees aren't looking over recent interaction history before proceeding with a case.
Identify & Solve Specific Problems
As we said before, QA helps you find out why something is happening when it's not immediately obvious from the data. That might mean finding a solution to a problem stemming from a new product launch, or digging into a problem you've only just isolated. This makes QA incredibly useful for specific deep-dives, for example to investigate the effectiveness of the following:
A new process
New documentation
Adoption (new process adherence, documentation use etc.)
Training
Knowledge on new products/services
But it's also a chance to capture new insights and detect emerging topics.
Let’s Take an Example
This is where the PDCA framework can be applied to maximum effect. Let's take the 'knowledge on new products/services' example from above, but imagine it in the context of a new product launch. Let's say you're an e-commerce company launching a new line of shoes.
Plan
You'll plan ahead for any questions on these shoes (manufacturing, costs, potential discounts, delivery). You'll be updating knowledge bases, creating templates and training your agents on where to find all the information they need to solve your customers' requests.
Do
You deal with the new requests that come in as a result of these new shoes. When you conduct your reviews, it becomes evident that a lot of these new requests are not being solved first time. The data confirms this: you notice a suspiciously high re-opening rate for tickets tagged "new shoes". As instructed, your agents are sending customers to the various knowledge base articles that have been set up to help them. However, some of these articles aren't solving customers' questions.
Check
Now you need to see what the problem is. Are your agents simply linking to the wrong place? Are they selecting the wrong article when another article could actually solve the customer's problem? Or is the content itself the issue: are customers re-opening tickets after being sent a specific link?
Here's where you assign reviews to your reviewers to get to the root of the problem. You automatically assign them tickets with the following criteria to investigate: channel: email, chat; reason: new shoes; ticket re-opened: yes.
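To make that selection concrete, here's a minimal sketch in Python of how the criteria could be expressed as a filter. The ticket records and field names (channel, reason, reopened) are hypothetical stand-ins for whatever attributes your help desk or QA tool actually exposes.

```python
# Minimal sketch: filter tickets into a review queue using the criteria above.
# The ticket fields (channel, reason, reopened) are hypothetical stand-ins
# for whatever attributes your help desk exposes.

tickets = [
    {"id": 101, "channel": "email", "reason": "new shoes", "reopened": True},
    {"id": 102, "channel": "chat",  "reason": "returns",   "reopened": False},
    {"id": 103, "channel": "chat",  "reason": "new shoes", "reopened": True},
]

def matches_criteria(ticket):
    """Channel is email or chat, reason is 'new shoes', and the ticket was re-opened."""
    return (
        ticket["channel"] in {"email", "chat"}
        and ticket["reason"] == "new shoes"
        and ticket["reopened"]
    )

review_queue = [t for t in tickets if matches_criteria(t)]
print([t["id"] for t in review_queue])  # -> [101, 103]
```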
Act
Let's say the review process identifies three articles that contain wrong information. The action you take might be to update all three articles and also investigate why the information was wrong (in this case, that could involve some discussions with the manufacturing team).
Once you've corrected this, the process repeats:
Plan: Inform agents of the change and show them where to find the new materials
Do: They use the updated documentation
Check: Carry out the reviews and see if the issue has been resolved and, if not,
Act: Take necessary action
Case Study! @GetYourGuide
Let's take a look at an example where a company found a problem and solved it with a QA deep dive!
What the problem/challenge was:
Following a negative CSAT analysis, we adapted our goodwill policy to make it more lenient. We then found that we were giving back too much money (x% of our net revenue), and we wanted to lower it.
How they used QA to solve the problem:
We created scorecards to focus solely on the refunds we were giving out.
To narrow our search, we focused on only those refunds that, theoretically, should not have been refunded (i.e. refunds made on non-refundable tours) and analysed if they were refunded according to our current goodwill procedure or not.
What the solution was:
We found that, indeed, there was an issue with adherence to our goodwill policy, and we also identified common patterns explaining why these refunds were being made.
Using the analysis, we revamped our goodwill material (the articles available to the customer service team), and we created a series of trainings and workshops.
We also used the other patterns we had identified and worked with our sales and tech teams to lower costs from their side, too.
Adherence to our goodwill procedure started to rise and, as a result, our costs started to fall.
How long it took to fix:
3-4 months.
What Tickets Should I Review?
This branch of QA isn't about the agents themselves - the goal is to understand the root cause of an issue. Of course, any valuable lessons about a specific agent that surface indirectly during the exercise should still be shared.
Here, we emphasize the need to rely on metrics and ticket attributes (e.g. tags, contact reasons, channels).
Choose your reviews based on what you're trying to understand
In this case, it's perfectly acceptable to pick tickets that returned low CSAT for this type of QA check. That's because you're objectively trying to understand why the scores are coming back low, rather than judging agent performance on tickets that happened to receive a low CSAT score.
The selection of tickets is done regardless of which agents worked on them.
Another example: if you want to see whether new documentation is being adopted, select a sample of tickets where that documentation should have been applied. This will tell you how effective the new process or training has been.
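As a rough illustration of that kind of agent-blind sampling, here's a minimal sketch in Python. It assumes a hypothetical list of ticket records with a tags field and an agent field; the 'returns' topic and the field names are illustrative only.

```python
import random

# Minimal sketch of agent-blind sampling: pick tickets by topic only and
# deliberately ignore who handled them. The fields ("tags", "agent") are
# hypothetical stand-ins for your own ticket attributes.

def sample_for_review(tickets, relevant_tag, sample_size, seed=42):
    """Return a reproducible random sample of tickets carrying the given tag."""
    eligible = [t for t in tickets if relevant_tag in t["tags"]]
    rng = random.Random(seed)  # fixed seed so the same batch can be re-pulled
    return rng.sample(eligible, min(sample_size, len(eligible)))

# Illustrative data: 200 tickets on mixed topics, handled by five agents.
tickets = [
    {"id": i, "tags": ["returns"] if i % 3 else ["shipping"], "agent": f"agent_{i % 5}"}
    for i in range(1, 201)
]

batch = sample_for_review(tickets, relevant_tag="returns", sample_size=20)
print(len(batch), "tickets queued for review, regardless of agent")
```

Note that the agent field is never consulted when building the batch - that's the point: you're reviewing the topic, not the person.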
How Many Reviews Should I Conduct?
Depending on the problem you're trying to solve and your ticket volumes, it could take up to 50 reviews to see what is going on and find a solution.
This part of QA refers to the analysis you conduct to get to the bottom of a wider problem that you've identified.
For example, say there's a spike in email queries about 'cancellations', 'payment issues' and 'order changes'. You can set up a batch of reviews for your team to investigate further based on these criteria.
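As a rough sketch of how that might look in practice, the snippet below counts email queries by contact reason to surface the spiking topics and then pulls the matching tickets into one review batch. The ticket data and field names are hypothetical.

```python
from collections import Counter

# Minimal sketch: count email queries by contact reason to spot emerging
# topics, then queue the spiking topics for deeper review. The ticket
# fields are hypothetical, not tied to any particular help desk.

tickets = [
    {"id": 1, "channel": "email", "reason": "cancellations"},
    {"id": 2, "channel": "email", "reason": "payment issues"},
    {"id": 3, "channel": "email", "reason": "order changes"},
    {"id": 4, "channel": "chat",  "reason": "cancellations"},
    {"id": 5, "channel": "email", "reason": "cancellations"},
]

# Which topics are coming in most often over email?
email_reasons = Counter(t["reason"] for t in tickets if t["channel"] == "email")
print(email_reasons.most_common(3))

# Hand the spiking topics to reviewers as one batch.
spiking_topics = {"cancellations", "payment issues", "order changes"}
review_batch = [
    t for t in tickets
    if t["channel"] == "email" and t["reason"] in spiking_topics
]
print([t["id"] for t in review_batch])
```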
Want more Quality Assurance tips? Get the full Customer Service QA guide!