I am seeking to use an automated site crawler (e.g. Screaming Frog, Crawl4AI) on our Community instance on Verint's SaaS to assist with Customer Journey mapping. Unfortunately (and unsurprisingly), this is being blocked by robots.txt.
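For context, here's a minimal sketch of how I'm confirming the block, using Python's standard-library robotparser. The URL and user-agent strings below are just placeholders for illustration, not our actual instance:

```python
from urllib import robotparser

# Hypothetical community URL -- substitute your own Verint Community instance
ROBOTS_URL = "https://community.example.com/robots.txt"

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()

# Check whether a given crawler user agent is permitted to fetch the site root
for agent in ["Screaming Frog SEO Spider", "*"]:
    allowed = rp.can_fetch(agent, "https://community.example.com/")
    print(f"{agent!r} allowed: {allowed}")
```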
Is there a specific way that Verint prefers we do this? Should we open a ticket with Verint SaaS support to get our tool allowed to crawl? I have not yet found any documentation on this.
I do know that site scraping is possible, as one of our downstream channel partners has done it, but I'm seeking some best-practice guidance here. Thanks all.