Receiving notifications about moderation

Former Member

 How does moderation and abuse work? - User Documentation - Verint Community 11.x - Telligent Community

The above article tells me that a moderator will be notified when there is content to review. I tested this by adding my own account to the moderator role, then impersonating another user, and reporting a user as abusive. In this case, I received a notification. Great.

But, when I tried it a different way and impersonated users to report CONTENT as abusive, I did NOT receive a notification. Instead, I went into Administration/Moderation Queue and saw the report under "In Process".

What do I need to do to ensure that I receive a notification (email notification preferred!) every time any person or content ends up in ANY of the 3 tabs of the Moderation Queue? What have I missed here?

  • First, understand that moderation and abuse are not the same thing. Moderation means you are forcing content to be reviewed, so when content enters moderation you will be notified.

    Abuse, on the other hand, is different. The system tries to detect abuse on its own, using the abuse automation plugins or the logic around flagging. Once a plugin determines content is abusive, or the content receives enough flags to confirm it, the content is removed from the site and the author is allowed to appeal. If the author appeals, you will be notified to review the appeal; otherwise, the system assumes that since no one contested the removal, the content was indeed abusive.
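The flag-and-appeal flow described in the reply above can be sketched as a small state machine. This is an illustrative model only, not Verint Community's actual code or API; the class and method names here are hypothetical, and the threshold of 2 flags mirrors the setting mentioned later in this thread:

```python
class ContentAbuseWorkflow:
    """Hypothetical sketch of the described flow: flag -> remove -> appeal or expire.
    Not the Verint Community API; names and threshold are illustrative."""

    def __init__(self, flag_threshold=2):
        self.flag_threshold = flag_threshold
        self.flags = 0
        # States: visible -> removed -> appealed (moderator review) | confirmed_abusive
        self.state = "visible"

    def report_abuse(self):
        """Another user flags the content as abusive."""
        self.flags += 1
        if self.state == "visible" and self.flags >= self.flag_threshold:
            # Enough flags: content is pulled from the site; the author may appeal.
            self.state = "removed"

    def author_appeals(self):
        """Only an appeal moves the item to a state where a moderator is notified."""
        if self.state == "removed":
            self.state = "appealed"

    def moderator_is_notified(self):
        return self.state == "appealed"

    def expire_appeal_window(self):
        """No appeal: the system assumes the abuse detection was correct."""
        if self.state == "removed":
            self.state = "confirmed_abusive"
```

Note how, in this model, no moderator notification ever fires on the flags themselves; it fires only on the author's appeal, which matches the behavior the original poster observed.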

  • Former Member in reply to Patrick M.

    I hear you about the difference between moderation and abuse. Noted.

    And I understand about the abuse automation. Noted.

    But what about abuse as reported by other users? I received a notification when a USER was reported as abusive. Is there a way to receive a notification if CONTENT is reported as abusive (or reported as abusive at least twice, which is the threshold we have set)?

  • Are you getting the notification as the user who was reported as abusive? That is, is it going to that user's actual account? We do notify a user if their account is flagged as abusive.

    For content, you won't be notified unless the author appeals it; then you will be notified and allowed to decide. Otherwise, the absence of an appeal is treated as confirmation of the abuse detection.
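The asymmetry in the reply above (moderators hear about USER reports immediately, but about CONTENT reports only after an appeal) can be summarized in a few lines. This is a hypothetical illustration of the behavior described in this thread, not actual product code:

```python
def moderator_notified_on_report(target_kind, author_appealed=False):
    """Illustrative summary of the notification behavior described in this thread
    (not Verint Community code): user-abuse reports notify moderators directly;
    content-abuse reports notify moderators only once the author appeals."""
    if target_kind == "user":
        return True
    if target_kind == "content":
        return author_appealed
    raise ValueError("unknown report target: " + repr(target_kind))
```

Under this model, the original poster's first test (reporting a user) produces a notification, while the second test (reporting content) produces none until an appeal occurs.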

  • Former Member in reply to Patrick M.

    OK, I did some more testing on this based on your response. Something is not working properly, and I don't know if it's a true glitch or just a setting that I have missed. I suspect the latter. Here's what happened:

    1. "Abusive User's Content is Abusive" is ENABLED.
    2. A and B mark C (the USER) as abusive.
    3. C receives no notifications about this.
    4. Moderator receives no notifications about this.
    5. C posts content.
    6. Neither C (the USER) nor C's content appears in the abuse workflow.

    The above directly contradicts what this resource tells me should happen: https://community.telligent.com/community/11/w/user-documentation/63064/how-does-moderation-and-abuse-work

    WHY?

  • Are you sure the actual user has been confirmed as abusive? They should be on the third tab (In Process).

    Also, the user notification is different from a normal notification; you will need access to that user's email to see it.

  • Former Member in reply to Patrick M.

    They are not confirmed as abusive, so I guess that's the issue. But that raises another question: how many times does a user need to be reported as abusive to be confirmed as abusive? Is that a setting I have control over? I've looked and looked, and to my eye the threshold applies to abusive CONTENT, not abusive USERS.

  • Somewhat related to this thread: moderators are notified of content reported as abusive only after the author of the content makes an appeal. I would think that a moderator should be able to plug in and always review anything reported. It seems this feature was built for communities where SPAM or sensitive posts happen all the time, so the goal was not to burden moderators with notifications for things that can be auto-detected and deleted. The flip side is communities with infrequent SPAM, where moderators should be alerted immediately about anything that enters a moderation queue they are responsible for. I just wanted to clarify that for anyone else who comes across this post.

  • You may want to describe your moderation-heavy abuse workflow in Ideas and Improvements so it can be considered as an option in a future update.

    The current abuse workflow is designed for public communities, where SPAM can be created easily. It puts the burden on the content author to start the review process, which avoids overloading moderators with SPAM created by bots. It also provides an automated route (via notifications from the community) to communicate with authors of valid content, so they can explain how and why their content was flagged, rather than a moderator making a blind decision.