Hello support community,
If your community provides customer support, how do you measure deflection (community-answered versus customer support-answered) with the available data provided in Reporting?
Thank you,
Bob.
Hi rfritz,
This is a great question, and an important one to level-set expectations around.
With out-of-the-box Verint reporting, deflection is not a single, automated metric. All of the common approaches to measuring deflection require some level of manual analysis or interpretation of standard reports.
Verint provides the data, but not a ready-made “deflection” report. Most teams calculate deflection by reviewing and combining multiple OOTB reports outside the platform (or manually interpreting them).
1. Community-Answered vs. Staff-Answered Threads (Manual Review)
Use accepted/verified answer reports (forum details)
Review answer authorship by applying or hiding site roles (customer vs. employee/moderator)
Manually classify results as:
Community-resolved = potential deflection
Staff-resolved = assisted support
There’s no automatic segmentation—this usually requires filtering, exporting, or reviewing report outputs.
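If it helps, here is a minimal sketch of that classification step in Python, assuming you export the accepted/verified answer report to CSV. The file name, column names, and staff role names below are placeholders, so adjust them to whatever your own export actually contains:

```python
# Minimal sketch: classify an exported accepted-answers report by answer author role.
# "accepted_answers_export.csv" and the column name "answer_author_roles" are placeholders.
import pandas as pd

STAFF_ROLES = {"Employee", "Moderator", "Administrator"}  # assumption: your staff role names

answers = pd.read_csv("accepted_answers_export.csv")

def classify(roles: str) -> str:
    """Mark a thread staff-resolved if any of the answer author's roles is a staff role."""
    author_roles = {r.strip() for r in str(roles).split(",")}
    return "staff-resolved" if author_roles & STAFF_ROLES else "community-resolved"

answers["resolution"] = answers["answer_author_roles"].apply(classify)
print(answers["resolution"].value_counts())  # community-resolved vs. staff-resolved counts
```

The role check is the judgment call here; document which roles you count as "staff" so the split stays consistent from one reporting period to the next.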
2. Accepted Answers + Views (Manual Correlation)
Pull reports for:
Solved threads
Thread views
Manually correlate the two to estimate potential deflection
This is often summarized externally as:
“Threads solved by the community received X views, representing potential case avoidance.”
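For example, a rough sketch of that correlation, again assuming CSV exports of the two reports with placeholder file and column names:

```python
# Minimal sketch: join community-resolved threads with thread views to estimate
# potential deflection. File and column names are placeholders for your own exports.
import pandas as pd

solved = pd.read_csv("solved_threads_export.csv")   # expects: thread_id, resolution
views = pd.read_csv("thread_views_export.csv")      # expects: thread_id, views

community_solved = solved[solved["resolution"] == "community-resolved"]
merged = community_solved.merge(views, on="thread_id", how="left")

total_views = merged["views"].fillna(0).sum()
print(f"Threads solved by the community received {int(total_views):,} views "
      "(potential case avoidance).")
```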
3. Self-Service Consumption Metrics
Content and thread view reports are available OOTB
Interpreting those views as deflection requires manual framing, not native attribution
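As a worked example of that framing step, the math usually reduces to applying a documented assumption to the raw view counts. The rate and view count below are purely illustrative, not defaults:

```python
# Minimal sketch of the "manual framing": apply an assumed case-avoidance rate to
# solution views to get a directional deflection estimate.
ASSUMED_DEFLECTION_RATE = 0.02   # assumption: 2% of solution views would otherwise be cases

solution_views = 125_000         # example figure from an OOTB content/thread view report
estimated_cases_avoided = solution_views * ASSUMED_DEFLECTION_RATE
print(f"Estimated cases avoided (directional): {estimated_cases_avoided:,.0f}")
```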
4. First Response Analysis
Verint shows all responses (timestamps and the users who posted them), so you should be able to identify first-response timing and authorship
Determining whether it was peer-led vs. staff-led again requires manual filtering or review
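A sketch of that check, assuming an exported replies report with placeholder file and column names:

```python
# Minimal sketch: find the first response per thread from an exported replies report,
# then flag whether it came from a peer or from staff. Column names are placeholders.
import pandas as pd

replies = pd.read_csv("thread_replies_export.csv",
                      parse_dates=["posted_date"])  # expects: thread_id, posted_date, author_role

first_responses = (replies.sort_values("posted_date")
                          .groupby("thread_id", as_index=False)
                          .first())

STAFF_ROLES = {"Employee", "Moderator", "Administrator"}  # assumption: your staff role names
first_responses["first_responder"] = first_responses["author_role"].apply(
    lambda r: "staff" if r in STAFF_ROLES else "peer")

print(first_responses["first_responder"].value_counts())
```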
Without integrations or custom analytics:
Verint does not natively confirm whether a support case was avoided
Deflection metrics are directional and inferred, not definitive
Most organizations explicitly document this in their reporting methodology.
“Using standard Verint reporting, we estimate deflection based on community-resolved threads and self-service content consumption. These insights require manual review but provide a consistent directional view of support impact.”
This tends to set the right expectations while still demonstrating value.
Thanks so much for your response!
It seems we should also incorporate GA engaged sessions to help identify indirect deflection, since that gives a better measure of views where someone actually stayed on the answer long enough (roughly 10-30 seconds) to read it.
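For example, a rough sketch of that filter, assuming a GA4 export with placeholder column names and an assumed 30-second threshold:

```python
# Minimal sketch: filter a GA4 export of answer-page views down to "engaged" views,
# using an engagement-time threshold as a proxy for "stayed long enough to read the answer".
# Column names (avg_engagement_seconds, views) are placeholders for your GA export.
import pandas as pd

ENGAGEMENT_THRESHOLD_SECONDS = 30   # assumption: 30s is enough time to read an answer

ga = pd.read_csv("ga4_answer_pages_export.csv")
engaged = ga[ga["avg_engagement_seconds"] >= ENGAGEMENT_THRESHOLD_SECONDS]

print(f"Engaged answer-page views: {int(engaged['views'].sum()):,} "
      f"of {int(ga['views'].sum()):,} total views")
```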
Following up on this Sara, lots of GREAT info!
So, inferring from what you shared, we are looking at:
I am able to pull some of this data and can tease out and calculate other numbers from what is provided. (We are using PowerBI.)
Is there anything I'm missing here and/or am I completely off base?