Reporting Content for Moderation with Reasons

In response to the Online Safety Act, I'm seeing social media platforms let users deliberately report content and categorise it with a specific reason, which gives moderators better context. Here's an example from Facebook:

What is Verint doing in the Community product to provide this level of fidelity in its reporting and moderation tools, so that online harms can be addressed more effectively?