Further explanation of moderation/abuse workflow for users identified as abusive

I've read this post and it's a really good explanation of how abuse and moderation work: community.telligent.com/.../how-does-moderation-and-abuse-work

The bit I'm not clear on is what happens to user accounts that are identified as abusive, and also which plugins trigger users specifically (not just their content) to be deleted or set as abusive.

I assume 'Abusive User Detection by Abuse Frequency' is one plugin that causes user accounts to be identified as abusive, but are there any others?

As I understand it, if an account is identified as abusive it enters the abuse workflow and shows under: Admin > Moderation > Moderation Queue > In Process tab.
The user will be listed there with an appeal time frame. If no appeal is made within that time frame then I believe the account and its content are deleted.

However, are there any other abuse automation / abuse reporting consequences that can cause users to be set to a disapproved or banned status instead of being deleted? Or are they only ever flagged as abusive and then deleted?

I ask because we have a number of users in our community who are set to disapproved, and we don't think this was done by any of our moderators.

Thanks 

Reply
  • Abuse automation will only ever flag the account as abusive and then delete the account and its content once it is verified as abusive (the appeal is rejected or no appeal is made within the time frame).

    The user status can be adjusted by administrators, but is not part of the automatic abuse workflow. 
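
    To illustrate the distinction, here is a minimal sketch of the behaviour described above: flag the account, wait out the appeal window, and delete the account and its content only if the appeal is rejected or never made. User status changes (disapproved, banned) are left out because they are manual administrator actions. The status names, appeal window length, and class are made up for illustration and are not Telligent Community's actual implementation or API.

    ```python
    from datetime import datetime, timedelta

    # Hypothetical status labels for illustration only; these are not
    # actual Telligent Community values or APIs.
    FLAGGED_ABUSIVE = "flagged as abusive"
    DELETED = "deleted"
    RESTORED = "restored"


    class AbuseCase:
        """Sketch of the described workflow: a flagged account (and its
        content) is deleted only if the appeal window closes with no
        appeal or with a rejected appeal; otherwise it stays in the
        queue or is restored."""

        def __init__(self, appeal_window_days=7):
            self.status = FLAGGED_ABUSIVE
            self.appeal_deadline = datetime.utcnow() + timedelta(days=appeal_window_days)
            self.appeal_accepted = None  # None means no appeal was made

        def submit_appeal(self, accepted):
            # An appeal only counts if it arrives before the deadline.
            if datetime.utcnow() <= self.appeal_deadline:
                self.appeal_accepted = accepted

        def resolve(self, now=None):
            now = now or datetime.utcnow()
            if self.appeal_accepted is True:
                self.status = RESTORED   # appeal accepted: account kept
            elif self.appeal_accepted is False or now > self.appeal_deadline:
                self.status = DELETED    # rejected, or no appeal in time
            return self.status           # otherwise still in process
    ```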
