Thanks Lu. Yes, you're right. It should work for the writer, but it feels like being a slave to an AI algorithm that flags everything, true or not. I will do my very best, but after that I'm done letting the AI mind-wrecking stuff get to me. Thinking about it with a calmer mind, I think it could be a good idea (to eliminate any possible doubt) that if a story gets flagged even slightly as AI, a disclaimer or similar "explanation" could be added at the start of the story to clear up any misperceptions. We don't have control over AI or the detectors or how they work, but we do have control over our own work and writing. That's the best anyone can do. Outside of that, what the algorithm thinks is beyond our control. Many of my stories were about writers and quotes from them, and I put many quotes and research links in those stories, and it's those kinds of things that can get tagged as AI, even though I properly credited the quotes. My view is "if in doubt, put something at the top of the story" rather than risk problems.
You see, the problem with running AI detectors on a story AFTER it has already been published months ago is that the story's content is already out there in the systems the detectors scan, so any or all combinations of words used in the story can show up as matching, possible AI, or plagiarism. But it's not, it's because the story is already "viral". Copyleaks is used a lot in colleges etc., where students submit work so it can be checked that it's their own, which it should be. But for content writers who have already published their work online, it is less effective, because their own work can show up as AI or plagiarism. The AI detectors aren't smart enough to distinguish whose work they are scanning, or whether it's the SAME work that writer already published. That's where the "bots" fail and aren't human.