<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Moderation/Abuse Management</title><link>https://community.telligent.com/community/11/w/developer-training/65106/moderation-abuse-management</link><description /><dc:language>en-US</dc:language><generator>14.0.0.586 14</generator><item><title>Moderation/Abuse Management</title><link>https://community.telligent.com/community/11/w/developer-training/65106/moderation-abuse-management</link><pubDate>Tue, 04 Aug 2020 21:26:48 GMT</pubDate><guid isPermaLink="false">637b2f6e-05cd-4fee-b1f6-f8081f3854e7</guid><dc:creator>Former Member</dc:creator><comments>https://community.telligent.com/community/11/w/developer-training/65106/moderation-abuse-management#comments</comments><description>Current Revision posted to Developer Training by Former Member on 08/04/2020 21:26:48&lt;br /&gt;
&lt;p&gt;The abuse plugin automates abuse detection to prevent SPAM and other abuse from entering the platform. Individual abuse detectors handle events applicable to their abuse detection logic and notify the abuse service when abuse is detected.&lt;/p&gt;
&lt;p&gt;The [[api-documentation:IAbuseCheckingContentType Plugin Type|IAbuseCheckingContentType]] interface defines the methods necessary to support the &lt;a href="/documentation/w/telligent-community-90/51653.abuse-service" rel="noopener noreferrer" target="_blank"&gt;Abuse Service&lt;/a&gt;. Content types implementing this service can be marked as abusive, hidden if enough marks are added, and moderated using the abuse UI.&lt;/p&gt;
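&lt;p&gt;For orientation, the members of [[api-documentation:IAbuseCheckingContentType Plugin Type|IAbuseCheckingContentType]] exercised in this article are sketched below. The signatures are reconstructed from the samples that follow; the actual interface may define additional members, so consult the API documentation for the full definition.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;using System;
using Telligent.Evolution.Extensibility.Content.Version1;

public interface IAbuseCheckingContentType
{
    // Called when content is first flagged; typically hides the content.
    void ContentSuspectedAbusive(Guid abuseId, Guid contentId);

    // Called when an appeal succeeds; typically reinstates the content.
    void ContentFoundNotAbusive(Guid abuseId, Guid contentId);

    // Called when moderators confirm the abuse; typically deletes the content.
    void ContentConfirmedAbusive(Guid abuseId, Guid contentId);

    // Lets the platform retrieve content that is otherwise hidden.
    IContent GetHiddenContent(Guid contentId);
}&lt;/pre&gt;&lt;/p&gt;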
&lt;p&gt;[toc]&lt;/p&gt;
&lt;h3&gt;&lt;a id="Why_Should_I_Allow_My_Content_to_be_Marked_as_Abusive" name="Why_Should_I_Allow_My_Content_to_be_Marked_as_Abusive"&gt;&lt;/a&gt;Why Should I Allow My Content to be Marked as Abusive?&lt;/h3&gt;
&lt;p&gt;Allowing content to be marked as abusive helps promote a clean and honest community. Members responsible for questionable content are notified by email that their content is being considered abusive and can appeal the finding directly from that email.&lt;/p&gt;
&lt;h3&gt;&lt;a id="Creating_an_Abuse_Plugin" name="Creating_an_Abuse_Plugin"&gt;&lt;/a&gt;Creating an Abuse Plugin&lt;/h3&gt;
&lt;p&gt;In this sample we build upon the &lt;a href="/training/w/developer90/52446.creating-a-custom-application-content" rel="noopener noreferrer" target="_blank"&gt;Application/Content&lt;/a&gt; sample. It is important to note that one or more Core Services can be implemented in the same [[api-documentation:IContentType Plugin Type|IContentType]] class.&lt;/p&gt;
&lt;p&gt;We took the [[api-documentation:IContentType Plugin Type|IContentType]] implementation and extended it to also implement the [[api-documentation:IAbuseCheckingContentType Plugin Type|IAbuseCheckingContentType]] interface.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;using System;
using System.Collections.Generic;
using Telligent.Evolution.Extensibility.Content.Version1;
using Telligent.Evolution.Extensions.Lists.Data;
using IContent = Telligent.Evolution.Extensibility.Content.Version1.IContent;

namespace Telligent.Evolution.Extensions.Lists
{
    public interface ILinkItemContentType : IContentType, IAbuseCheckingContentType
    {
        IContentStateChanges _contentState = null;
        
        #region IPlugin
        
        //...
        
        #endregion
        
        #region IContentType

        //...

        #endregion
        
        #region IAbuseCheckingContentType
        
        //...
        
        #endregion
    }
}    &lt;/pre&gt;&lt;/p&gt;
&lt;p&gt;When a piece of content is first flagged as abusive, the &lt;code&gt;ContentSuspectedAbusive&lt;/code&gt; method is called. The intention here is to temporarily disable the content, so we set the &lt;code&gt;IsEnabled&lt;/code&gt; property of the &lt;code&gt;LinkItem&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;public void ContentSuspectedAbusive(Guid abuseId, Guid contentId)
{
    // Look up the link; it may already have been removed.
    var content = LinksData.GetLink(contentId);
    if (content == null) return;

    // Temporarily hide the content while it is under review.
    content.IsEnabled = false;
}&lt;/pre&gt;&lt;/p&gt;
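&lt;p&gt;Once &lt;code&gt;IsEnabled&lt;/code&gt; is &lt;code&gt;false&lt;/code&gt;, ordinary listing queries should exclude the item. As a hypothetical sketch (the &lt;code&gt;LinkItem&lt;/code&gt; stand-in and &lt;code&gt;LinkQueries&lt;/code&gt; helper below are illustrative, not part of the platform API):&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for the sample's LinkItem type.
public class LinkItem
{
    public Guid Id { get; set; }
    public bool IsEnabled { get; set; } = true;
}

public static class LinkQueries
{
    // Normal listings trim content that abuse handling has disabled.
    public static IEnumerable&lt;LinkItem&gt; GetVisibleLinks(IEnumerable&lt;LinkItem&gt; all)
    {
        return all.Where(l =&gt; l.IsEnabled);
    }
}&lt;/pre&gt;&lt;/p&gt;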
&lt;p&gt;After the user is notified that their content was flagged, they are allowed to appeal the action. If a moderator accepts the appeal, they can approve the content and reinstate it on the platform. At that point the &lt;code&gt;ContentFoundNotAbusive&lt;/code&gt; method is executed and the &lt;code&gt;IsEnabled&lt;/code&gt; flag can be set back to &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;public void ContentFoundNotAbusive(Guid abuseId, Guid contentId)
{
    var content = LinksData.GetLink(contentId);
    if (content == null) return;

    // The appeal succeeded; reinstate the content.
    content.IsEnabled = true;
}&lt;/pre&gt;&lt;/p&gt;
&lt;p&gt;If, on the other hand, the content is deemed inappropriate for the community, the &lt;code&gt;ContentConfirmedAbusive&lt;/code&gt; method is called and the content should be permanently removed or deleted.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;public void ContentConfirmedAbusive(Guid abuseId, Guid contentId)
{
    // The abuse was confirmed; permanently remove the content.
    LinksData.DeleteLink(contentId);
}&lt;/pre&gt;&lt;/p&gt;
&lt;p&gt;In most cases you want to trim content that has been marked as abusive out of your result sets. So that the platform can still retrieve the disabled content, a &lt;code&gt;GetHiddenContent&lt;/code&gt; method is required. In this method, make sure any such trimming is bypassed so the content can be returned.&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;public IContent GetHiddenContent(Guid contentId)
{
    // Bypass any IsEnabled filtering so the platform can load the hidden item.
    return LinksData.GetLink(contentId);
}&lt;/pre&gt;&lt;/p&gt;
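&lt;p&gt;Putting the pieces together, the abuse lifecycle can be exercised end to end. The following self-contained sketch uses hypothetical in-memory stand-ins for &lt;code&gt;LinkItem&lt;/code&gt; and &lt;code&gt;LinksData&lt;/code&gt; (the real sample's data layer is its own implementation; only the flow is illustrated here):&lt;/p&gt;
&lt;p&gt;&lt;pre class="ui-code" data-mode="csharp"&gt;using System;
using System.Collections.Generic;
using System.Diagnostics;

public class LinkItem
{
    public Guid Id { get; set; }
    public bool IsEnabled { get; set; } = true;
}

// Hypothetical in-memory stand-in for the sample's LinksData class.
public static class LinksData
{
    private static readonly Dictionary&lt;Guid, LinkItem&gt; _links =
        new Dictionary&lt;Guid, LinkItem&gt;();

    public static void AddLink(LinkItem link) { _links[link.Id] = link; }
    public static LinkItem GetLink(Guid id)
    {
        LinkItem link;
        return _links.TryGetValue(id, out link) ? link : null;
    }
    public static void DeleteLink(Guid id) { _links.Remove(id); }
}

public static class AbuseLifecycleDemo
{
    public static void Main()
    {
        var id = Guid.NewGuid();
        LinksData.AddLink(new LinkItem { Id = id });

        // 1. Content is flagged: ContentSuspectedAbusive disables it.
        LinksData.GetLink(id).IsEnabled = false;
        Debug.Assert(!LinksData.GetLink(id).IsEnabled);

        // 2a. Appeal accepted: ContentFoundNotAbusive re-enables it.
        LinksData.GetLink(id).IsEnabled = true;
        Debug.Assert(LinksData.GetLink(id).IsEnabled);

        // 2b. Or abuse confirmed: ContentConfirmedAbusive deletes it.
        LinksData.DeleteLink(id);
        Debug.Assert(LinksData.GetLink(id) == null);
    }
}&lt;/pre&gt;&lt;/p&gt;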
&lt;p&gt;Here is the full sample.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://community.telligent.com/cfs-file/__key/communityserver-wikis-components-files/00-00-00-12-83/3683_2E00_LinkItemContentType_2E00_cs"&gt;community.telligent.com/.../3683_2E00_LinkItemContentType_2E00_cs&lt;/a&gt;&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;

&lt;div style="font-size: 90%;"&gt;Tags: abuse, Moderation, spam&lt;/div&gt;
</description></item></channel></rss>