Receiving notifications about moderation

Former Member

How does moderation and abuse work? - User Documentation - Verint Community 11.x - Telligent Community

The above article tells me that a moderator will be notified when there is content to review. I tested this by adding my own account to the moderator role, then impersonating another user, and reporting a user as abusive. In this case, I received a notification. Great.

But when I tried it a different way and impersonated users to report CONTENT as abusive, I did NOT receive a notification. Instead, I went into Administration/Moderation Queue and saw the report under "In Process".

What do I need to do to ensure that I receive a notification (email notification preferred!) every time any person or piece of content ends up in ANY of the 3 tabs of the Moderation Queue? What have I missed here?
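(For anyone trying to reproduce this without manually impersonating accounts: the content-abuse report can also be filed through the platform's REST API. Below is a minimal sketch. The `Rest-User-Token` authentication header is the documented REST mechanism, but the `abusereports.json` endpoint path and the field names are my assumptions, so verify them against the REST documentation for your version first.)

```python
# Minimal sketch: file an abuse report against a piece of content through the
# Verint Community REST API instead of impersonating users by hand.
# ASSUMPTIONS: the "abusereports.json" endpoint path and the field names below
# are illustrative guesses -- check the REST documentation for your version.
import base64

import requests

SITE = "https://community.example.com"  # your community URL (placeholder)
API_KEY = "your-rest-api-key"           # generated under Administration
USERNAME = "reporting-user"             # user the report is filed as

# Documented REST auth: Rest-User-Token = base64("apikey:username").
token = base64.b64encode(f"{API_KEY}:{USERNAME}".encode()).decode()
headers = {"Rest-User-Token": token}

# Report a specific piece of content as abusive (GUIDs are placeholders).
response = requests.post(
    f"{SITE}/api.ashx/v2/abusereports.json",
    headers=headers,
    data={
        "ContentId": "00000000-0000-0000-0000-000000000000",      # content GUID
        "ContentTypeId": "00000000-0000-0000-0000-000000000000",  # content type GUID
    },
)
response.raise_for_status()
print(response.json())
```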

Parents Reply
  • Former Member in reply to Patrick M.

    They are not confirmed as abusive, so I guess that's the issue. But that raises another question: how many times does a user need to be reported as abusive before they are confirmed as abusive? Is that a setting I have control over? I've looked and looked, and to my eye the threshold applies to abusive CONTENT, not abusive USERS.

Children
  • Somewhat related to this thread: moderators are notified of content reported as abusive only after the author of the content makes an appeal. I would think a moderator should be plugged in to review anything that gets reported. It seems this feature was built for communities where SPAM or sensitive posts happen all the time, so the goal is to avoid burdening moderators with notifications for things that can be auto-detected and deleted. The flip side is communities with infrequent SPAM, where moderators should be alerted immediately about anything that lands in a moderation queue they are responsible for. I just wanted to clarify that for anyone else who comes across this post.

  • You may want to write up your moderation-heavy abuse workflow in Ideas and Improvements so it can be considered as an option in a future update.

    The current abuse workflow is designed for public communities, where SPAM is easy to create in volume. It puts the onus on the content author to start the review process, which keeps moderators from being overloaded with SPAM created by bots. It also provides an automated route (via notifications from the community) for communicating with authors of valid content, so they can explain how/why their content was flagged rather than a moderator making a blind decision. If you need immediate alerts in the meantime, a rough polling workaround is sketched below.
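    For completeness, here is the kind of stopgap the original question asks about: a small script run on a schedule (cron, Task Scheduler) that polls the moderation queue over REST and emails moderators when new items appear. This is a minimal sketch, not the product's built-in notification mechanism; the `moderation/items.json` endpoint and the response field names are assumptions on my part, so substitute the real endpoint from the REST documentation for your version.

```python
# Minimal sketch of the "notify me for anything in the queue" workaround:
# poll the moderation queue over REST and email moderators about new items.
# ASSUMPTIONS: the "moderation/items.json" endpoint and the "Items"/"Id"
# response fields are illustrative guesses -- check the REST documentation
# for your version and substitute the real endpoint and fields.
import base64
import smtplib
from email.message import EmailMessage

import requests

SITE = "https://community.example.com"    # community URL (placeholder)
API_KEY = "your-rest-api-key"
USERNAME = "admin"                        # service account with moderator rights
MODERATOR_EMAIL = "moderators@example.com"

# Documented REST auth: Rest-User-Token = base64("apikey:username").
token = base64.b64encode(f"{API_KEY}:{USERNAME}".encode()).decode()
headers = {"Rest-User-Token": token}

seen_ids = set()  # persist this between runs (file or DB) in real use


def poll_queue():
    """Fetch the moderation queue and return items not seen before."""
    resp = requests.get(f"{SITE}/api.ashx/v2/moderation/items.json", headers=headers)
    resp.raise_for_status()
    new_items = [i for i in resp.json().get("Items", []) if i.get("Id") not in seen_ids]
    for item in new_items:
        seen_ids.add(item["Id"])
    return new_items


def notify(items):
    """Email a summary of the new queue items to the moderators."""
    msg = EmailMessage()
    msg["Subject"] = f"{len(items)} new moderation queue item(s)"
    msg["From"] = "noreply@example.com"
    msg["To"] = MODERATOR_EMAIL
    msg.set_content("\n".join(str(i.get("Id")) for i in items))
    with smtplib.SMTP("localhost") as smtp:  # point at your mail relay
        smtp.send_message(msg)


if __name__ == "__main__":
    items = poll_queue()  # schedule this script to run every few minutes
    if items:
        notify(items)
```

    The polling approach covers all three tabs at once (whatever the endpoint returns), which is exactly what the built-in notifications do not do today; the trade-off is latency equal to your polling interval and one more script to maintain.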