Telligent Community Server has a redesigned moderation, spam, and abuse system. Moderation and spam handling have been combined into a single abuse workflow to:
- Preemptively moderate content and comments.
- Detect spam when the content is posted.
- Review and/or remove abusive content after it has been published.
In the abuse workflow, content can be moderated, reported, appealed, reviewed, hidden, or removed.
The purview of moderation has been extended to all posts, comments, and content. (In previous releases, it was available only to blog, forum and media gallery posts.) Note that when a user is moderated, all of their content is moderated.
Content is moderated through the Abuse Management interface. The previous moderation panels have been removed.
Users who attempt to access abusive or moderated content can see that the content exists but is under review. A search that returns the content displays a message that it is under review; navigating directly to the page displays a message that the content is currently unavailable because it is pending review.
Content that is identified as spam enters the abuse workflow as abusive content and is managed in the same way: reported, appealed, reviewed, and upheld or hidden.
Users can be flagged as suspected spammers. When a user is suspected of being abusive, no change is made to the account - the user can still log in and post. However, a rule ensures that their content is automatically flagged as abusive. The user can appeal their abusive status at any time. If they don't appeal, or if their appeal is denied, the account is deleted once the user is confirmed as abusive, with the option to remove all of their content along with the account.
The former spam management user interface has been removed, and any spam references have been removed from communityserver.config. New plugins are used to deal with abuse (including spam). The previously existing spam rules were migrated to these plugins, which allow you to specify what content types should be excluded from checks.
Any content or comment/reply can be flagged as abusive, at which point it enters the abuse workflow. Content that has been reported but has not yet reached the minimum threshold of abuse reports can still be viewed and moderated. Once the number of reports reaches the configurable minimum threshold, the content enters the appeal queue, where users with the appropriate permission can review it and decide whether to hide it.
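The report-threshold behavior described above can be sketched as follows. This is a minimal illustration, not Telligent Community Server code: the class, method names, and default threshold are all assumptions.

```python
# Illustrative sketch of the report-threshold workflow. All names here
# (AbuseWorkflow, report_threshold, appeal_queue) are assumptions for
# illustration, not actual Telligent Community Server APIs.

class AbuseWorkflow:
    def __init__(self, report_threshold=5):
        self.report_threshold = report_threshold  # configurable minimum
        self.report_counts = {}   # content_id -> number of abuse reports
        self.appeal_queue = []    # content awaiting review

    def report(self, content_id):
        """Record one abuse report; move the content into the appeal
        queue once the configurable threshold is reached."""
        count = self.report_counts.get(content_id, 0) + 1
        self.report_counts[content_id] = count
        if count == self.report_threshold:
            self.appeal_queue.append(content_id)
        return count

    def pending_review(self, content_id):
        """True when the content has hit the threshold and awaits review."""
        return content_id in self.appeal_queue
```

Until the threshold is reached, the content remains available for ordinary moderation; once it lands in the appeal queue, reviewers decide whether to hide it.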
If content is reported enough times and found abusive, or if the author's posts are picked up by the abuse filter plugins, the author is notified by email and given the chance to appeal the decision. If they don't contest the finding, or if the appeal is not upheld by the review board, the content is hidden from the community. If enough of a user's content is found abusive, the user can be found abusive and removed from the site along with their content.
The system uses the following plugin types to detect and enforce abuse rules:
- Email address counts - Detects abuse/spam by reviewing the content for excessive email addresses.
- Forbidden word counts - Detects abuse/spam by reviewing the content for excessive use of forbidden words.
- Link counts - Detects abuse/spam by reviewing the content for excessive links.
- Posting frequency by authenticated users - Detects abuse/spam by reviewing posting frequency by users over a specific duration of time.
- Posting frequency by IP address - Detects abuse/spam by reviewing posting frequency by IP address over a specific duration of time.
- Abusive user detection by abuse frequency - Identifies users as abusive based on the frequency that their content is identified as abusive.
- Abusive user's content is abusive - When a user is identified as abusive, flag all new content by that author as abusive.
- Akismet - Detects abuse/spam by sending details about new content to the Akismet service for spam review.
- Authenticated user content posting frequency - Detects abuse/spam by reviewing similar content posted by an authenticated user within a specific duration of time.
- Moderate content by moderated users - Automatically moderates content created by moderated users.
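To make the detection plugins concrete, here is a sketch of what a link-count style check might look like, including a content-type exclusion. The function name, parameters, and default limit are illustrative assumptions, not part of the actual plugin API.

```python
import re

# Illustrative link-count abuse check. The name exceeds_link_count and
# its parameters are assumptions for illustration only.
def exceeds_link_count(body, max_links=3, content_type=None, excluded_types=()):
    """Flag content whose number of links exceeds max_links.

    Content types listed in excluded_types are skipped entirely,
    mirroring the per-plugin content-type exclusions.
    """
    if content_type in excluded_types:
        return False
    links = re.findall(r"https?://\S+", body)
    return len(links) > max_links
```

The other count-based plugins (email addresses, forbidden words) follow the same pattern: scan the content, compare against a configured limit, and skip excluded content types.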
Most abuse type plugins have a tab that allows you to exclude specific types of content from review.
For developers, moderation support can be added to a custom application so that its content participates in platform moderation.
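Conceptually, hooking an application into platform moderation means exposing its content through a moderation contract and registering the content type. The sketch below is hypothetical (and in Python for brevity); the real Telligent extensibility API is .NET-based and differs, so every name here is an assumption.

```python
from abc import ABC, abstractmethod

# Hypothetical contract a custom application's content type might satisfy
# so a moderation system can inspect and hide it. Illustrative only; not
# the actual Telligent developer API.
class ModeratableContent(ABC):
    @abstractmethod
    def body(self):
        """Return the content text submitted for abuse checks."""

    @abstractmethod
    def set_hidden(self, hidden):
        """Hide or restore the content when the workflow decides."""

MODERATED_TYPES = []  # content types covered by platform moderation

def register_for_moderation(content_cls):
    """Register a content type so the abuse workflow covers it."""
    MODERATED_TYPES.append(content_cls)
    return content_cls
```

An application would then decorate its content classes with `register_for_moderation`, after which the platform's reporting, appeal, and hiding machinery applies to them like any built-in content type.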