I tested a highly rated AI detection tool, and it is worthless

Don't mind me reviving this thread xD

Due to the string of AI rejections, I'm putting together a mass-thread to collect every story from anyone who has been rejected on AI grounds. I figured if we put them all in one place, we'll have more concrete evidence that something is wrong with the detection. Feel free to add your personal experiences with AI rejection.

I appreciate the effort, but article after article keeps coming out saying AI writing detectors are practically snake oil. I listed some links below. Like I said, I tested them myself and they randomly generate false positives in everything I write. Depending on the tense, they will say an original work is 100% AI. The people who make the software are selling something to make money off of panic and hype.

I'm tackling the problem by writing in present tense, which requires spatial awareness (chatbots don't have that yet). Present tense causes the lowest false positives so far. AI writing detectors look for patterns, not quality of work, and present tense doesn't have the patterns they look for. I have switched to writing in this style on my VictoriaCSandalwood account until some kind of word-tracking software comes along that can track my original work as I write it.

Here is a link to an example of my present tense writing if you guys want to use the style too:

https://literotica.com/s/fun-with-victoria-and-jane-pt-01


The following links are articles about why AI detection is practically snake oil:

https://arstechnica.com/information...i-admits-that-ai-writing-detectors-dont-work/

https://arstechnica.com/information...hink-the-us-constitution-was-written-by-ai/2/

https://www.trails.umd.edu/news/detecting-ai-may-be-impossible-thats-a-big-problem-for-teachers

https://www.plagiarismtoday.com/2023/06/01/the-major-obstacle-to-detecting-ai-writing/
 
The rollout of Google's AI program last week made me come back on here and check these topics. If you haven't read about Google Gemini, I highly recommend you look into it. The whole situation is proof that AI output is built from the information fed into the program. It was generating responses in exactly the way it had been specifically programmed to.

I've been saying all along that many people who lecture us about the rejections really don't know how AI works. I don't care if someone feeds stuff into ChatGPT and gets it writing them stock sentences. ChatGPT's language model is based entirely on set info that programmers put into it. The AI stuff is meant to mimic humans based on info that was put into it by humans!

The Google program showcases a far more realistic picture of the worst of AI and how it can be used than any doomsday scenario. In the end, it's real human writers who are going to suffer.
 
I had two stories kicked back for AI use on my alt account. I was shocked and furious because I don't use AI to write my stories. I use Grammarly to check for typos and that's all. I don't let it rewrite anything. I paid for a subscription to Originality.ai's content scanner and ran some tests. I scanned my story once, and it came back as 13% likely AI-written. Completely false, because I know for damn sure I didn't use AI to write any part of it. I scanned it again after fixing some line spacing that got messed up during the copy/paste, and it reported 45% AI-written. I scanned it a third time, and now it's at 25%. F*cking scam software much?

Some writers have a high false positive rate because we use the rules of English and write formally, or the software is just hyped-up sh*t. First-person writing seems to cause more false positives, and that's one of my favorite ways to write about a sexual experience! Now I'm having a New Year meltdown because writing is a huge part of my life. I've been writing original stories for over ten years, and now software is claiming IT ISN'T MINE??? Is this what I have to look forward to for the rest of my life? Competing to prove I'm not a f*cking robot?

I lost interest in painting because AI took that over, and now my writing is being steamrolled. I plan to cancel my Grammarly subscription. I wonder if they are feeding our data to these crap detector tools. Anyone not using Grammarly getting false positives? My content from before I started using Grammarly gets fewer false positives. Can anything be done about this before I lose my f*cking mind? Anyone else make progress overcoming this?
Maybe make some minor mistakes on purpose? Just be glad it's not a college final exam paper where your prof flunks you because his software said you used AI.
 
I've suspected something like this was the case. How the fuck could any program tell whether something was written by AI? Apparently, these detectors are no better than dowsing rods:

https://beincrypto.com/ai-detection-tools-cannot-spot-cheaters/

https://www.metastellar.com/nonfict...ed-why-they-dont-work-and-what-to-do-instead/

https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/

I've also long thought that Grammarly and similar programs like ProWritingAid were overrated. At best they might catch a few things that the MS Word grammar checker missed, and at worst their style suggestions impose a writing style that's alien to what you want or subverts your intended meaning.

Maybe that's why I have yet to fall afoul of these BS AI detector programs used by the mods.
 
I just replied to your original post with a bunch of links of my own, but I see you beat me to it. :)
 
I actually tried making some deliberate mistakes, and it still said the original writing was AI-generated. AI detectors are snake oil.
 
I know the discussion around AI detection has been beaten long past death, but I thought I'd add my two cents. Maybe the info I share can help someone dealing with false flags. (Probably not, unless the nature of the detection tools changes drastically.)

The next few paragraphs are gonna get a little technical; I've put a tl;dr at the bottom for those who aren't interested in the details.

How LLMs (large language models) like ChatGPT work is that they build sentences word by word, compiling a list of possible next words based on the previous word(s) (and likely some additional variables kept in memory to maintain contextual cohesion). This Computerphile video does a great job of illustrating the core technique, along with a proposed detection method.
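
To make that concrete, here's a toy sketch in Python. The probability table is obviously made up; a real model works on tokens and gets its probabilities from billions of learned parameters and the whole preceding context. But the mechanic is the same: pick the next word at random, weighted by how likely the model thinks it is.

```python
import random

# Toy next-word table. Invented for illustration only; a real LLM computes
# these probabilities on the fly from the entire preceding context.
next_word_probs = {
    ("she", "opens"): {"the": 0.6, "her": 0.3, "a": 0.1},
    ("opens", "the"): {"door": 0.5, "window": 0.3, "letter": 0.2},
    ("opens", "her"): {"eyes": 0.7, "laptop": 0.2, "mouth": 0.1},
    ("opens", "a"):   {"book": 0.5, "window": 0.3, "drawer": 0.2},
}

def sample_next(prev_two):
    """Pick the next word at random, weighted by the model's probabilities."""
    options = next_word_probs[prev_two]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

sentence = ["she", "opens"]
for _ in range(2):
    sentence.append(sample_next(tuple(sentence[-2:])))
print(" ".join(sentence))  # e.g. "she opens the door"
```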

As explained in the above video, in order to detect whether a sentence is LLM-generated, you work off of the list of possible next words, checking the probability of each word in the sentence having been chosen.

The problem with this method is that it requires you to know what LLM (might have) generated the content, as well as the parameters used during generation. For example, if the language model based the next word on 3 preceding words in one output but then was changed to use the entire preceding sentence in another, the proposed detection method might give false readings. So basically, every LLM's architecture and settings would need to be publicly available information. Which is not going to happen.
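
If you did have the generating model and its exact settings, the check would look roughly like this. This is my guess at the general shape, reusing the toy table from the previous snippet; it is not any vendor's actual code.

```python
import math

def average_log_prob(words, next_word_probs):
    """Mean log-probability of each word given its two predecessors,
    under the reference model's probability table."""
    scores = []
    for i in range(2, len(words)):
        context = (words[i - 2], words[i - 1])
        # Fall back to a tiny probability for word choices the model
        # considered (near-)impossible.
        prob = next_word_probs.get(context, {}).get(words[i], 1e-6)
        scores.append(math.log(prob))
    return sum(scores) / len(scores) if scores else float("-inf")

# Text that always takes the most likely word scores close to 0 and looks
# "machine-like"; text full of low-probability choices scores very negative
# and looks "human". Where you draw the cutoff is a judgement call, and it
# shifts whenever the reference model or its sampling settings change,
# which is exactly the problem described above.
```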

What modern so-called AI detection tools are likely doing is a bastardized, limited version of the above method, if they're actually doing anything at all. Now, I don't have any insider information on how these algorithms actually work, but my best guess is that they use a similar probability-scanning approach in which sequences of words are checked to see how likely they were to be computer-generated. This relies solely on statistics gathered from large-scale data analysis, unless they have access to Grammarly's and ChatGPT's source code (which they don't).
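
One thing they might be layering on top of the raw probability score is a measure of how evenly "safe" the word choices are across sentences. This is pure guesswork on my part, but a heuristic like the one below would explain why careful, formal human writing trips the detectors.

```python
import statistics

def uniformity_flag(sentence_scores, spread_threshold=0.5):
    """Flag text whose per-sentence scores barely vary.
    sentence_scores would come from something like average_log_prob()
    applied to each sentence; the threshold here is arbitrary."""
    if len(sentence_scores) < 2:
        return False  # not enough sentences to say anything
    spread = statistics.pstdev(sentence_scores)
    return spread < spread_threshold

# Machine text tends to pick consistently safe words, so its sentence scores
# cluster tightly; human text usually swings between plain and surprising
# sentences. A disciplined human writer with a very even style can easily
# land on the wrong side of this, which would explain the false positives
# people are reporting in this thread.
```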

There is also the possibility of the detection tools being based on a neural network that was fed examples of ChatGPT writing vs. human writing, but such a neural network would be effectively worthless. Though this would actually explain why the tools are so inconsistent.
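
If it's the classifier route instead, the skeleton would look something like this. I'm using a small scikit-learn network and placeholder training text purely for illustration; whatever the vendors actually train on is not public.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real tool would use huge labeled corpora.
human_samples = ["placeholder human-written paragraph ...", "another one ..."]
ai_samples = ["placeholder ChatGPT output ...", "yet another one ..."]

texts = human_samples + ai_samples
labels = ["human"] * len(human_samples) + ["ai"] * len(ai_samples)

# Bag-of-words features feeding a tiny neural network. The classifier only
# learns surface statistics of its training data, so its output on new text
# is a guess dressed up as a percentage.
detector = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000),
)
detector.fit(texts, labels)

# Small changes to the input (re-pasting a story, fixing line breaks) can
# shift the score, which fits the 13% / 45% / 25% swings described earlier
# in the thread.
print(detector.predict_proba(["a new story to check"]))
```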

The moment a reliable detection algorithm is developed, it will make waves in the scientific and academic communities, which are currently facing major issues trying to deal with ChatGPT. If anyone claims to have a tool that can reliably detect AI-written content, I'd ask to see the corresponding journal paper, and even then I'd be skeptical; there's a lot of sketchy data pushed into academic papers.

tl;dr: There is no current method (that I am aware of; someone please correct me if I'm wrong) for reliable detection of LLM-generated content.

source: I have a background in neural networks and computer science. Though I'm no longer keeping a close eye on the cutting edge, I still have an understanding of the underlying technology.
 
Personally, I wouldn't care if a story was generated by AI. If it's hot and sexy, who cares?
 