If you measure support case deflection, how do you measure it?

Hello support community,

If your community provides customer support, how do you measure deflection (community-answered versus customer support-answered) with the data available in Reporting?

Thank you,

Bob.

  • Hi,

    This is a great question, and an important one to level-set expectations around.

    With out-of-the-box Verint reporting, deflection is not a single, automated metric. All of the common approaches to measuring deflection require some level of manual analysis or interpretation of standard reports.

    Key Point Up Front

    Verint provides the data, but not a ready-made “deflection” report. Most teams calculate deflection by reviewing and combining multiple OOTB reports outside the platform (or manually interpreting them).

    How Teams Typically Do This with OOTB Reports

    1. Community-Answered vs. Staff-Answered Threads (Manual Review)

    • Use accepted/verified answer reports (forum details)

    • Review answer authorship by applying or hiding site roles (customer vs. employee/moderator)

    • Manually classify results as:

      • Community-resolved = potential deflection

      • Staff-resolved = assisted support

    There’s no automatic segmentation—this usually requires filtering, exporting, or reviewing report outputs.
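
    As a concrete (purely illustrative) example: if you export that report to CSV, a short script can do the classification for you. The file name, column names, and role labels below are assumptions, so map them to whatever your export actually contains:

    import pandas as pd

    # Hypothetical export of the accepted/verified answer report.
    # Rename the file and columns to match your actual report output.
    threads = pd.read_csv("accepted_answers_export.csv")

    # Roles you treat as staff; everything else counts as community.
    STAFF_ROLES = {"Employee", "Moderator", "Administrator"}

    threads["resolution"] = threads["answer_author_role"].map(
        lambda role: "staff-resolved" if role in STAFF_ROLES else "community-resolved"
    )

    # Community-resolved threads are the potential deflection pool.
    print(threads["resolution"].value_counts())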


    2. Accepted Answers + Views (Manual Correlation)

    • Pull reports for:

      • Solved threads

      • Thread views

    • Manually correlate the two to estimate potential deflection

    This is often summarized externally as:

    “Threads solved by the community received X views, representing potential case avoidance.”
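
    If you'd rather not correlate the two by hand, a sketch along these lines works once both reports are exported to CSV (file and column names are again placeholders):

    import pandas as pd

    solved = pd.read_csv("solved_threads.csv")  # thread_id, resolution, ...
    views = pd.read_csv("thread_views.csv")     # thread_id, views

    # Join views onto solved threads, then total the views on
    # community-resolved threads as an estimate of potential deflection.
    merged = solved.merge(views, on="thread_id", how="left")
    community_views = merged.loc[
        merged["resolution"] == "community-resolved", "views"
    ].sum()

    print(f"Threads solved by the community received {community_views:,.0f} "
          "views, representing potential case avoidance.")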


    3. Self-Service Consumption Metrics

    • Content and thread view reports are available OOTB

    • Interpreting those views as deflection requires manual framing, not native attribution


    4. First Response Analysis

    • Verint shows all responses (timing and the user who posted each response), so you should be able to identify first-response timing and author

    • Determining whether it was peer-led vs. staff-led again requires manual filtering or review
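
    If your reply export includes timestamps and author roles, that review is also easy to script. As before, the file and column names are assumptions:

    import pandas as pd

    # Hypothetical export of all replies: thread_id, author_role, posted_at
    replies = pd.read_csv("thread_replies.csv", parse_dates=["posted_at"])

    # The earliest reply per thread is the first response.
    first = (replies.sort_values("posted_at")
                    .groupby("thread_id", as_index=False)
                    .first())

    STAFF_ROLES = {"Employee", "Moderator", "Administrator"}
    first["first_response"] = first["author_role"].map(
        lambda role: "staff-led" if role in STAFF_ROLES else "peer-led"
    )

    print(first["first_response"].value_counts())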


    Important Limitation to Call Out

    Without integrations or custom analytics:

    • Verint does not natively confirm whether a support case was avoided

    • Deflection metrics are directional and inferred, not definitive

    Most organizations explicitly document this in their reporting methodology.


    How Many Teams Position This Internally

    “Using standard Verint reporting, we estimate deflection based on community-resolved threads and self-service content consumption. These insights require manual review but provide a consistent directional view of support impact.”

    This tends to set the right expectations while still demonstrating value.

  • Following up on this, Sara. Lots of GREAT info!

    So, inferring from what you shared, we are looking at:

    • Direct Deflections - Answered Questions
      • "Verified" Answer - 1-1 ratio, however, we want to break this down into:
        • Support answered
        • Community answered
      • "Suggested" Answer - 1-"#" ratio, as this question "may be" answered and/or in transition to being verified. Likewise, these can be broken down into: (I wonder what percentage others use?) 
        • Support suggested
        • Community suggested
    • Indirect Deflections - Answered Questions (GA4 - Engaged Sessions (Views))
      • "Verified" Answer - 1-"#" ratio. We want to be conservative. Perhaps 20%... I wonder what percentage others use? 
      • "Suggested" Answer - 1-"#" ratio. We would be much more conservative with this number. Perhaps 10%... Likewise, I wonder what percentage others use?

    I am able to pull some of this data and can tease out and calculate the other numbers from what is provided. (We are using Power BI.)

    Is there anything I'm missing here, or am I completely off base?
