Refresh Your Quality Monitoring Program with these 15 Best Practices

  1. Include agents in identifying what to measure.

    If your quality scorecards have been around a while, it's probably time to give them a tune-up. This is the perfect opportunity to get input from the agents who interact with your customers every day and have a good handle on what satisfies them. Having been on the receiving end of quality scorecards, they probably already have ideas about how to improve them. Including agents in this process will not only yield a better scorecard, but will also help with buy-in.

  2. Make quality monitoring positive rather than negative.

    If agents have a look of dread whenever it's time to receive coaching on their quality results, something is wrong with the approach. Agents need to be comfortable receiving a lot of feedback - it's part of the job description. But they should understand the feedback is meant to make them even better, not to punish them. After all, even world-class athletes benefit from coaching. Build trust with agents so they're receptive to feedback, and make sure you're focusing on positives as well as negatives. Quality analytics makes it easy to find successful interactions by pinpointing positive sentiment. The quality monitoring plan should include steps to review some positive interactions for each agent every cycle. See best practice #7 for more information about analytics.

  3. Implement a dispute resolution process.

    Related to #2, agents will have more trust in the quality monitoring process if they are able to dispute the results. This will foster dialog and help calibrate evaluations. Empowering agents to share opinions that could result in a scoring upgrade will lead to better engagement and buy-in. Quality monitoring shouldn't be something that is "done to" agents. They should be active participants.

  4. Provide incentives.

    If quality is truly important to your contact center, put some money behind it. That not only sends a clear message that quality is important, but it will also create excitement and focus among agents. Consider rewarding the individual or team with the highest average quality monitoring score each month. If money is tight, awards like a reserved parking space or choice of shifts can be just as meaningful.

  5. Make quality scores very visible.

    To keep quality monitoring scores top of mind, make sure everyone can see them. They should be part of agent dashboards and available on agent desktops so agents can self-manage their results. Plus, the overall call center average should be posted in a prominent place, perhaps near the employee entrance or on a wall of the contact center floor.

  6. Use industry-leading quality management software.

    The best quality management software automates many of the labor-intensive quality monitoring tasks, relieving evaluators and agents of administrative burden. For example, quality management systems can choose and route call recordings to evaluators based on user-defined criteria, such as call length and ACD disposition. Additionally, QM tools can automate the dispute resolution process discussed in #3 above. And modern quality management software can also record agent screens, providing evaluators with a more holistic view of the interaction.
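    As a rough sketch of how criteria-based routing might work (the thresholds, field names, and queue names below are hypothetical, not taken from any particular QM product):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative recording metadata; field names are hypothetical.
@dataclass
class Recording:
    call_id: str
    duration_seconds: int
    acd_disposition: str  # e.g. "resolved", "transferred"

def route_for_evaluation(rec: Recording) -> Optional[str]:
    """Return an evaluator queue for recordings that match the
    user-defined criteria, or None if the recording is skipped."""
    if rec.duration_seconds > 600:
        return "long-call-review"    # unusually long calls
    if rec.acd_disposition == "transferred":
        return "transfer-review"     # transfers may hide quality issues
    return None

calls = [
    Recording("c1", 720, "resolved"),
    Recording("c2", 180, "transferred"),
    Recording("c3", 240, "resolved"),
]
routed = {c.call_id: route_for_evaluation(c) for c in calls}
```

    In a real QM system these rules would be configured in the product's UI rather than in code, but the logic is the same: metadata in, evaluation queue out.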

  7. Monitor 100% of interactions with quality management analytics.

    A drawback of traditional quality monitoring is that only a very small sample of interactions is reviewed. Is that 1-2% sample really representative of the contact center's or an agent's entire body of work? Quality management analytics uses AI to pinpoint interactions containing specific categories, words, or phrases and route them for evaluation. That way you can ensure you are evaluating a more representative sample of agent performance – both great and subpar – across different interaction types.
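    A toy illustration of the idea: real analytics engines use AI models for sentiment and intent, but even naive phrase spotting shows how interactions can be flagged for evaluation (the phrases and function name below are invented):

```python
# Stand-in for AI-powered analytics: flag transcripts that contain
# target phrases so they can be routed for evaluation. A real engine
# would use sentiment/intent models, not literal string matching.
TARGET_PHRASES = {"cancel my account", "thank you so much", "speak to a supervisor"}

def flag_for_review(transcript: str) -> bool:
    text = transcript.lower()
    return any(phrase in text for phrase in TARGET_PHRASES)

transcripts = [
    "I'd like to cancel my account today.",
    "Everything is working now, thanks.",
    "Thank you so much, you were wonderful!",
]
flagged = [t for t in transcripts if flag_for_review(t)]
```

    Note that this flags both negative signals (cancellations, escalations) and positive ones (praise), which supports the positive-coaching practice in #2.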

  8. Include all support channels.

    If you support digital channels like chat and email, those should also be included in your quality monitoring efforts. Otherwise, you're getting only part of the quality story. Supporting digital channels requires distinct skill sets, so just because an agent is competent at voice interactions doesn't mean they'll have a comparable chat quality score. Taking this up a level, just because your contact center's phone quality is high doesn't mean your chat quality is also high. Quality scores can also be aggregated at the channel level so you can compare results across all the channels you provide.
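    Channel-level aggregation can be as simple as averaging evaluation scores per channel (the scores below are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (channel, score) pairs from completed evaluations.
evaluations = [
    ("voice", 92), ("voice", 88),
    ("chat", 75), ("chat", 81),
    ("email", 90),
]

# Group scores by channel, then average each group.
by_channel = defaultdict(list)
for channel, score in evaluations:
    by_channel[channel].append(score)

channel_averages = {ch: mean(scores) for ch, scores in by_channel.items()}
```

    A gap like voice averaging 90 while chat averages 78 is exactly the kind of channel-level signal this practice is meant to surface.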
  9. Facilitate self-evaluations.

    Agents should regularly have the chance to evaluate their own interactions. Ideally, these would include ones the evaluators are also assessing so agents and evaluators can have meaningful conversations about the "why" behind the scores. This will align expectations and give agents a better understanding and appreciation of the process.

  10. Consider other metrics.

    If you expand the definition of "quality monitoring," it could mean keeping an eye on all statistics that impact quality. For example, a holistic quality dashboard would include stats like customer satisfaction scores, transfer rates, and first call resolution rates in addition to quality monitoring scores. Quality scores are certainly important, but they're more powerful when used in conjunction with other metrics.
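    As a minimal sketch of pairing quality scores with other metrics (the metric names, values, and thresholds below are all illustrative):

```python
# Hypothetical holistic dashboard row combining a quality monitoring
# score with related metrics; every value here is invented.
dashboard = {
    "quality_score": 88.5,          # average evaluation score (0-100)
    "csat": 4.3,                    # customer satisfaction (1-5 scale)
    "transfer_rate": 0.12,          # share of calls transferred
    "first_call_resolution": 0.78,  # share resolved on first contact
}

# The point of combining metrics: a healthy quality score can still
# coexist with a weak companion metric that deserves attention.
needs_attention = (
    dashboard["quality_score"] >= 85
    and dashboard["first_call_resolution"] < 0.80
)
```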

  11. Don't just focus on outliers.

    If the goal of your quality monitoring program is to develop a directionally accurate quality score, focusing solely on outliers, like short or long calls, won't get you there. The bulk of the sample should be randomly selected and ideally be a representative slice of the overall pie. Someone needs to investigate outliers like repeatedly short calls or excessively long calls to identify system or behavioral issues, but do they need to be formally evaluated? Probably not.
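    One way to keep the evaluation sample representative is to exclude outliers from the random draw and route them to a separate investigation list (the duration thresholds here are arbitrary examples):

```python
import random

# Hypothetical call records: (call_id, duration in seconds).
calls = [
    ("c0", 30), ("c1", 45), ("c2", 300), ("c3", 280), ("c4", 310),
    ("c5", 260), ("c6", 295), ("c7", 1500), ("c8", 25), ("c9", 305),
]

SHORT, LONG = 60, 1200  # illustrative outlier thresholds

# Outliers go to an investigation list, not to formal evaluation.
outliers = [c for c in calls if c[1] < SHORT or c[1] > LONG]
typical = [c for c in calls if SHORT <= c[1] <= LONG]

random.seed(0)  # fixed seed so the example is reproducible
sample = random.sample(typical, k=min(3, len(typical)))
```

    The formal sample is drawn only from the typical calls, while the very short and very long ones are set aside for someone to investigate for system or behavioral issues.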

  12. Show agents what they're striving for.

    A call recording (or chat transcript, etc.) can be worth a thousand words. Quality service shouldn't be some vague concept represented by questions on a scorecard. Agents need to know exactly what "excellent" looks and sounds like. Sharing examples of highly scored interactions will help clarify expectations.

  13. Tailor scorecards when needed but don't go overboard.

    Unless you have a very narrowly focused operation, you'll find that when it comes to quality scorecards, one size does not fit all – and like many things, less is more. Different call types often require their own scorecards due to the nature of the interaction. For example, a tech support call has different quality measures than an outbound sales call. Create unique scorecards when needed but try not to create so many that it becomes an administrative mess.

  14. Incorporate live listening.

    Quality monitoring doesn't have to be limited to "after-the-fact" reviews of call recordings and digital transcripts. Listening and coaching while calls are happening can be a powerful way for agents to learn good CX lessons. The best call center software includes features that allow supervisors to see the agent's screen, listen in, and whisper coaching advice to agents in real time for on-the-spot course corrections.

  15. Keep it simple.

    As best practice #13 implies, contact centers should strive for simplicity in their quality monitoring. This not only applies to the number of different scorecards they have, but also the contents of those scorecards. It's tempting to include a line item about every little step the organization wants agents to follow, but overengineering scorecards can render them useless. Keep scorecard questions to a handful of the most important drivers of quality.