Case Study

How one Outsourcer switched from a Traditional Quality Model to an Innovative Model using Scorebuddy.

"The first step we took as part of our strategy, was to decide what platform would be used that would provide analytics and reporting we could do with ease. We needed to make sure we could access the data easily. We selected Scorebuddy!”"

- Kieran McCarthy, Head of Quality at Voxpro



According to Kieran McCarthy, Improvement Strategist for Voxpro, a traditional quality model involves a substantial number of quality leads/evaluators who review support tickets and chats or listen to agent calls. Quality leads might spend 60-70% of their time reviewing what people did previously, one hour or one week ago, trying to find something in a huge haystack of calls that informs them enough to have a substantive coaching conversation; only then does the quality lead or supervisor get the opportunity to meet with the agent who handled the call. Finding the time for that conversation is also a challenge, because agents are tied up taking phone calls and answering emails and chats.

Apart from the operational challenges, a traditional QA process is a significant investment. On average there are 15 agents for each team lead and 30 agents for every quality lead. Even if a lead reviews only 1 or 2 interactions per agent per month, team leads will spend an inordinate amount of time reading tickets before they ever get around to speaking with agents about what they have found.

For example, 10 quality leads costing €30,000 per year each comes to €300,000; at least 50% of their time, worth €150,000, is spent reviewing someone else's work. And once they have finished reviewing the agents' work, they still have not begun to change agent behaviors.
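The cost figures above can be sketched as a back-of-envelope calculation. The numbers are the illustrative ones from the text, not audited figures:

```python
# Back-of-envelope QA cost model using the figures quoted above.
# All numbers are illustrative assumptions, not Scorebuddy or Voxpro data.
num_quality_leads = 10
salary_eur = 30_000          # per quality lead, per year
review_time_share = 0.50     # share of time spent reviewing past work

total_cost = num_quality_leads * salary_eur
review_cost = total_cost * review_time_share

print(f"Total QA payroll: EUR {total_cost:,}")       # EUR 300,000
print(f"Spent on reviews: EUR {review_cost:,.0f}")   # EUR 150,000
```

The point of the arithmetic: half the QA payroll is consumed before any coaching, the activity that actually changes behavior, has even started.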

In addition, what exactly is the team lead listening or looking for? Soft skills, empathy, ownership, knowledge? These are all very subjective terms, and they lead to emotional conversations that are likely to be unproductive. When the team or quality lead finally meets with the agent, they begin with a discussion, or sometimes even an argument, about what those terms really mean.

On top of that, Voxpro has a multicultural, multilingual staff, which adds complexity: quality leads must speak one of the 13 languages Voxpro supports and must be available to review the recordings.

This is typical of most Outsourcers: agents handle 70-80 calls per week, yet only 1-2% of those calls are reviewed. That consumes 50% of QA resources, and even after all that work is done, quality leads end up disagreeing with agents about the results.

“Clients really care about how much work are you doing (productivity) and how well are you doing it (quality); after all these are the only real measurements from the Outsourcer’s client perspective.”

We saw that this traditional quality model was inefficient and, worse than that, there was no correlation between the quality scores and the key metrics our clients care about, namely NPS/CSAT from a CX perspective and the amount of work being done.


Taking those things into account, Kieran and his team decided to do things a different way. Until then, scoring had been done in Excel spreadsheets, so metadata was not available.

“The first step we took as part of our strategy, was to decide what platform would be used that would provide analytics and reporting we could do with ease. We needed to make sure we could access the data easily. We selected Scorebuddy!”

The next step was to move quality out of the hands of the few, the quality leads, and into the hands of the many, the agents, who make up the bulk of Voxpro's more than 4,000 staff. So there were quite a few agents participating in this step.

The strategy then turned to educating agents in self-scoring, quality review of their own work, since agents were likely to be more self-critical than if someone else scored them. It was determined that it would be far more efficient and effective to have agents review their own work, so we built a simpler quality card with the 3 main things we were looking for, requiring the agent to review the ticket if, and only if, they received a negative survey.

We were looking for:

1: Did you authenticate the caller properly? (awareness)

2: What do you believe was the main reason for any detractor or negative review? (awareness)

3: Could you have done anything differently, and if so, what? (awareness)
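The review-on-negative-survey rule above could be sketched as follows. The function name and the NPS-style detractor threshold are our assumptions for illustration, not Voxpro's actual implementation:

```python
# Sketch of the self-scoring trigger: an agent reviews a ticket if, and only
# if, the survey was negative. We assume an NPS-style 0-10 scale where a
# score of 6 or below marks a detractor (an illustrative assumption).
SELF_SCORE_QUESTIONS = [
    "Did you authenticate the caller properly?",
    "What do you believe was the main reason for the negative review?",
    "Could you have done anything differently, and if so, what?",
]

def needs_self_review(survey_score: int, detractor_threshold: int = 6) -> bool:
    """Return True when the survey score counts as negative."""
    return survey_score <= detractor_threshold

print(needs_self_review(4))   # True  -> agent answers the three questions
print(needs_self_review(9))   # False -> no self-review required
```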

People inherently want to do better, and we noted that a poor score was actually to their benefit: it allowed us to conduct a productive, awareness-based quality coaching session. It was clear that we could coach agents more effectively when their CSAT and quality scores didn't match up.

Agents had several reactions to this process:

  • 90% were delighted. A huge number of self-evaluations began happening almost immediately.
  • Those who had bad surveys every month didn't complete a self-evaluation, which screamed to us that they were disengaged. It came down to a simple yes or no: did you review your bad quality survey or not?

The question then arose: how many were evaluating surveys correctly? Using Scorebuddy and the huge amount of metadata we had gathered so quickly, we were able to view the data easily.


That was more than 1.5 years ago; within 2 months we had 80% compliance and the data was rich. We were looking at evaluations from the agents' perspective, not from the team leads' perspective, and gathering all that metadata had taken no time away from agents or team leads.

Analysis of the data showed that 40% of the agents said they could have done a better job! Their comments suggested where and how they could have done better.
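A minimal sketch of how such self-evaluation metadata could be tallied. The record layout and sample values are invented; they merely mirror the 80% compliance and 40% "could improve" figures from the text, and Scorebuddy's actual export format may differ:

```python
# Hypothetical self-evaluation records: compute the compliance rate (share of
# agents who completed a review) and the share who said they could improve.
records = [
    {"agent": "a1", "reviewed": True,  "could_improve": True},
    {"agent": "a2", "reviewed": True,  "could_improve": False},
    {"agent": "a3", "reviewed": False, "could_improve": False},
    {"agent": "a4", "reviewed": True,  "could_improve": True},
    {"agent": "a5", "reviewed": True,  "could_improve": False},
]

compliance = sum(r["reviewed"] for r in records) / len(records)
self_critical = sum(r["could_improve"] for r in records) / len(records)

print(f"Compliance rate:      {compliance:.0%}")     # 80%
print(f"Said 'could improve': {self_critical:.0%}")  # 40%
```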

Within a few months they had gathered enough data to give context to workshops and to bring in focus groups to help agents get better.

Quality leads were now freed up to look at the data. They had the time to do root cause analysis[1] and make improvements rather than spend all their time listening to phone calls and looking at tickets.


On one of Voxpro's contracts, the team applied this same strategy and was able to detect where the problems were coming from. Kieran's team discovered that the Scorebuddy analytics tool was not being used, so they trained people to use it. They were then able to move 500 agents from an NPS of 42 to 56 in 3 months. As a result of these improvements, they gained additional business with a major account.

For Outsourcers working with a scorecard system like Scorebuddy, the primary problem is that you inherit the ecosystem your clients use when they come on board. You need to integrate the client's CRM into the way you, as the Outsourcer, work. So it is very difficult to standardize platforms across all of your customers when it comes to delivering the service.

Scorebuddy can be used to standardize on one way of capturing quality, regardless of the ecosystem the client uses. Capturing all the data in Scorebuddy is invaluable.

Voxpro uses Sisense as an analytics tool, so they are able to use Scorebuddy's API to automatically feed Sisense with data for more sophisticated analyses when needed. This took the dashboards to the next level.

In conclusion, if you are getting tens of thousands of data points from your agents, you can analyze the data quickly and easily to move the needle at little or no cost to the Outsourcer.


Eventually, we evolved to a behavior model for the quality team. We asked ourselves: what drives CSAT and NPS, what drives productivity, and what do the behaviors that drive the best outcomes look like when they are healthy?

We wanted to improve productivity on a client contract, and of course we started by focusing on the people with the lowest productivity to raise them up; but others, such as top performers, inevitably suffer because they get less attention. We felt that wasn't sustainable.

Most of the phone calls came in between 12pm and 4pm, which was also when shifts changed. And we found that this was the same time that quality leads and team leads were roaming around to hold one-on-one coaching sessions and team meetings. We instructed them to let the agents do their work during those hours and to hold team meetings outside of them.


Productivity rose by 20%

We have integrated that model from self-assessment to behavior to automation into one process with Scorebuddy as an integral part of that process.

[1] Root cause analysis is a unique feature of Scorebuddy's scorecard solution.