<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>How does moderation and abuse work?</title><link>https://community.telligent.com/community/11/w/user-documentation/63064/how-does-moderation-and-abuse-work</link><description /><dc:language>en-US</dc:language><generator>14.0.0.586 14</generator><item><title>How does moderation and abuse work?</title><link>https://community.telligent.com/community/11/w/user-documentation/63064/how-does-moderation-and-abuse-work</link><pubDate>Tue, 19 Oct 2021 20:32:12 GMT</pubDate><guid isPermaLink="false">4ae8cf60-3c17-4ec0-b1a8-258e0c1eb7f1</guid><dc:creator>Tom Paolucci</dc:creator><comments>https://community.telligent.com/community/11/w/user-documentation/63064/how-does-moderation-and-abuse-work#comments</comments><description>Current Revision posted to User Documentation by Tom Paolucci on 10/19/2021 20:32:12&lt;br /&gt;
&lt;p&gt;Moderation, SPAM prevention, and abuse are all part of a workflow to prevent inappropriate content from being shown in the community. In general, the content creation workflow is:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/1731.7725.Moderation-and-Abuse.png" /&gt;&lt;/p&gt;
&lt;p&gt;When content is created or edited, it is reviewed by automated abuse detection rules. If the rules find the content to be abusive (SPAM, for example), the content is immediately flagged as abusive, hidden from the community, and the content enters the abuse workflow.&lt;/p&gt;
&lt;p&gt;If the automatic abuse detectors didn&amp;#39;t identify the content as abusive/SPAM, the author is then checked to determine whether their content should be moderated. An author&amp;#39;s account may be configured to moderate all of their content, or the application in which the content was created may require that all content within it be moderated. If either applies, the content enters the moderation workflow.&lt;/p&gt;
&lt;p&gt;If the content is not moderated and not automatically detected as abusive, it is visible in the community. Members who view it and consider it inappropriate or abusive can flag it. If enough flags are received relative to the author&amp;#39;s reputation, the content is hidden from the community and enters the abuse workflow.&lt;/p&gt;
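&lt;p&gt;The routing described above can be sketched as a small decision function (an illustrative sketch only; the function and parameter names are hypothetical and not part of the product):&lt;/p&gt;

```python
# Illustrative sketch of the content-creation workflow described above.
# All names here are hypothetical; the platform's real API differs.

def route_new_content(is_detected_abusive, author_is_moderated,
                      application_is_moderated):
    """Return which workflow newly created or edited content enters."""
    if is_detected_abusive:
        # Automatic abuse detection wins: the content is hidden immediately.
        return "abuse"
    if author_is_moderated or application_is_moderated:
        # Author- or application-level moderation applies.
        return "moderation"
    # Otherwise the content is published and visible to members,
    # who may still flag it as abusive later.
    return "published"
```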
&lt;p&gt;[toc]&lt;/p&gt;
&lt;h2&gt;&lt;a id="Moderation_Workflow" name="Moderation_Workflow"&gt;&lt;/a&gt;Moderation Workflow&lt;/h2&gt;
&lt;p&gt;When content enters the moderation workflow, it is listed within the &lt;strong&gt;Administration &amp;gt; Moderation &amp;gt; Moderation Queue&lt;/strong&gt; on the &lt;strong&gt;Awaiting Review&lt;/strong&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/2728.Screen-Shot-2017_2D00_09_2D00_27-at-4.42.00-PM.png" /&gt;&lt;/p&gt;
&lt;p&gt;The process for reviewing moderated content is:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/3438.2746.Moderation.png" /&gt;&lt;/p&gt;
&lt;p&gt;The moderator will be notified that there is content to review. When they review content in the &lt;strong&gt;Moderation Queue&lt;/strong&gt;, they can approve or deny the content. If the content is approved, it is immediately shown in the community and the author is notified that their content is available.&lt;/p&gt;
&lt;p&gt;If the moderator considers the content abusive and denies the moderated content, the content enters the abuse workflow.&lt;/p&gt;
&lt;h2&gt;&lt;a id="Automatic_Abuse_Detection" name="Automatic_Abuse_Detection"&gt;&lt;/a&gt;Automatic Abuse Detection&lt;/h2&gt;
&lt;p&gt;When content is created or edited, it is evaluated by automatic abuse detectors: configurable rules that review details of the content and its author to determine whether it is likely abusive. Automatic abuse detection rules are listed in &lt;strong&gt;Administration &amp;gt; Moderation &amp;gt; ABUSE AUTOMATION&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/5531.Screen-Shot-2017_2D00_09_2D00_27-at-4.15.36-PM.png" /&gt;&lt;/p&gt;
&lt;p&gt;Each automation rule can be enabled/disabled and configured to meet the needs of the community. Most automation rules enable specifying which types of content they review via their &lt;strong&gt;Content to Review&lt;/strong&gt; tab.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If any abuse automation rule considers newly created or edited content as abusive, it immediately enters the abuse workflow.&lt;/p&gt;
&lt;h2&gt;&lt;a id="Manual_Abuse_Detection" name="Manual_Abuse_Detection"&gt;&lt;/a&gt;Manual Abuse Detection&lt;/h2&gt;
&lt;p&gt;If content is not moderated or automatically determined to be abusive, it is published to the community. At that time, members can view the content. If members consider the content to be inappropriate or abusive, they can flag the content as abusive:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/3005.Screen-Shot-2017_2D00_09_2D00_27-at-4.19.09-PM.png" /&gt;&lt;/p&gt;
&lt;p&gt;The flag-as-abusive option is generally shown within the &lt;strong&gt;More&lt;/strong&gt; menu associated with the content. Each member can flag a piece of content only once.&lt;/p&gt;
&lt;p&gt;If a moderator flags the content as abusive, it immediately enters the abuse workflow. Otherwise, when non-moderators flag the content, the reputation of the flagging members, weighted by the number of votes, is compared to the reputation of the content&amp;#39;s author. If the votes outweigh the author&amp;#39;s reputation, the content enters the abuse workflow.&lt;/p&gt;
&lt;h2&gt;&lt;a id="Abuse_Workflow" name="Abuse_Workflow"&gt;&lt;/a&gt;Abuse Workflow&lt;/h2&gt;
&lt;p&gt;All content that enters the abuse workflow is hidden; members themselves are the exception. If a member is flagged as abusive, they remain visible within the community, but the &lt;strong&gt;Abusive User&amp;#39;s Content Is Abusive&lt;/strong&gt; automation rule ensures that any content they create immediately enters the abuse workflow. This automation rule is not retroactive; it only reviews content created after the member has been flagged. Abusive content is shown within the &lt;strong&gt;Moderation Queue&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/1643.Screen-Shot-2017_2D00_09_2D00_27-at-4.45.18-PM.png" /&gt;&lt;/p&gt;
&lt;p&gt;Abusive content follows this workflow:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/8512.3443.Abuse.png" /&gt;&lt;/p&gt;
&lt;p&gt;When content enters the abuse workflow, it is hidden and the author is notified that the content is considered abusive. The notification includes a link to appeal that designation.&lt;/p&gt;
&lt;p&gt;If the author chooses not to appeal, the content will expire and be flagged for expungement.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If the author appeals the abusive designation, the content and its appeal are shown on the &lt;strong&gt;Awaiting Review&lt;/strong&gt; tab of &lt;strong&gt;Administration &amp;gt; Moderation &amp;gt; Moderation Queue&lt;/strong&gt;, and moderators who can review the content are notified that there is content to review.&lt;/p&gt;
&lt;p&gt;The moderator can then review the content and its appeal and either approve or deny the appeal. If approved, the content is shown in the community again. If denied, the author is notified that the appeal has been denied and the content is flagged for expungement.&lt;/p&gt;
&lt;p&gt;When content is flagged for expungement, it waits for a configurable period of time and is then deleted completely from the community.&lt;/p&gt;
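&lt;p&gt;The abuse workflow steps above can be summarized as a small state machine (an illustrative sketch; the state and event names are hypothetical, not product terminology):&lt;/p&gt;

```python
# Illustrative state transitions for the abuse workflow described above.
# States and event names are hypothetical, for explanation only.

def next_abuse_state(state, event):
    """Advance a piece of abusive content through the workflow."""
    transitions = {
        # The author appeals the abusive designation within the appeal window.
        ("hidden", "appeal"): "awaiting_review",
        # No appeal before the appeal window elapses: schedule deletion.
        ("hidden", "appeal_window_elapsed"): "expunge_pending",
        # A moderator approves the appeal: the content is visible again.
        ("awaiting_review", "approve"): "published",
        # A moderator denies the appeal: schedule deletion.
        ("awaiting_review", "deny"): "expunge_pending",
        # After the expunge window, the content is deleted completely.
        ("expunge_pending", "expunge_window_elapsed"): "deleted",
    }
    # Unknown (state, event) pairs leave the content where it is.
    return transitions.get((state, event), state)
```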
&lt;h2&gt;&lt;a id="Moderation_and_Abuse_Configuration" name="Moderation_and_Abuse_Configuration"&gt;&lt;/a&gt;Moderation and Abuse Configuration&lt;/h2&gt;
&lt;p&gt;In addition to the configuration of abuse automation rules, the &lt;strong&gt;Administration &amp;gt; Moderation &amp;gt; Moderation Options&lt;/strong&gt; panel includes additional configuration options that govern the moderation and abuse workflows:&lt;/p&gt;
&lt;p&gt;&lt;img alt=" " src="/resized-image/__size/1040x0/__key/communityserver-wikis-components-files/00-00-00-12-80/6038.Screen-Shot-2017_2D00_09_2D00_27-at-4.28.53-PM.png" /&gt;&lt;/p&gt;
&lt;p&gt;Here, you can specify:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Possibly Abusive Threshold&lt;/strong&gt;: The minimum number of abuse reports required before content is reviewed for potential abuse. Once content receives this minimum number of votes, the reputation scores of the users who flagged the content are considered when determining whether the content enters the appeal process. Note that if a member who can moderate the content flags it as abusive, it is immediately identified as abusive and is not subject to this minimum.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Definitely Abusive Threshold&lt;/strong&gt;: The number of abuse reports at which content is always flagged as abusive. Once a piece of content has this many votes, it is identified as abusive regardless of the reputation of the reporters and the author.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Appeal Time Window&lt;/strong&gt;: The number of days an author has to appeal abusive content before it is expunged (scheduled for deletion).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Moderate Time Window&lt;/strong&gt;: The number of days a moderator has to review moderated content before it is expunged (scheduled for deletion).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expunge Time Window&lt;/strong&gt;: The number of days that expunged content waits before it is deleted.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exempt Authors from Abuse Automation Minimum Reputation Percentile&lt;/strong&gt;: The top reputation percentile that is exempt from abuse automation processing. For example, if this is set to 5 (as in the screenshot above), content from members with a reputation in the top 5% would not be reviewed by abuse automation. Authors who can review abuse reports where the content is created are also exempt from abuse automation.&lt;/li&gt;
&lt;/ul&gt;
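&lt;p&gt;The interaction of the two report thresholds can be sketched as follows (a minimal illustration; the names, default values, and the exact reputation-weighting calculation are assumptions, not the product&amp;#39;s documented formula):&lt;/p&gt;

```python
# Illustrative check combining the two abuse-report thresholds above.
# The parameter names, defaults, and reputation comparison are
# assumptions for illustration only.

def is_flagged_abusive(report_count, reporter_reputation_total,
                       author_reputation, flagged_by_moderator,
                       possibly_abusive_threshold=3,
                       definitely_abusive_threshold=10):
    """Decide whether reported content should enter the abuse workflow."""
    if flagged_by_moderator:
        # A moderator's flag is immediately decisive.
        return True
    if report_count >= definitely_abusive_threshold:
        # Past this count, reputation is ignored entirely.
        return True
    if report_count < possibly_abusive_threshold:
        # Not enough reports yet to evaluate at all.
        return False
    # Between the thresholds: weigh the reporters' combined reputation
    # against the author's reputation.
    return reporter_reputation_total > author_reputation
```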
&lt;p&gt;If any changes are made, be sure to click &lt;strong&gt;Save&lt;/strong&gt; to commit the changes.&lt;/p&gt;
&lt;h2&gt;&lt;a id="Reviewing_Content_in_the_Abuse_and_Moderation_Workflow" name="Reviewing_Content_in_the_Abuse_and_Moderation_Workflow"&gt;&lt;/a&gt;Reviewing Content in the Abuse and Moderation Workflow&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;Moderation Queue&lt;/strong&gt; (in &lt;strong&gt;Administration &amp;gt; Moderation&lt;/strong&gt;) can be used to review content that is not yet identified as abusive (but has abuse reports) and content anywhere within the abuse process.&lt;/p&gt;
&lt;p&gt;To view content that has been flagged as abusive but has not yet accumulated enough reports to be considered abusive, review the &lt;strong&gt;Possibly Abusive&lt;/strong&gt; tab. Here, you can proactively ignore or deny content before it formally enters the abuse workflow.&lt;/p&gt;
&lt;p&gt;To review content that is elsewhere in the abuse workflow, view the &lt;strong&gt;In Process&lt;/strong&gt; tab. Here, you can review content based on its current status within the workflow and filter to a specific group, application, or user. This tab can also be used to correct an error or review abusive content that has not yet been appealed.&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;
</description></item></channel></rss>