False positives in AI detection tools, and the fear of being human.

The browser is worse about those little squiggly lines than the document is. And it doesn't have an "I know, I don't care, now go away" button. How did you get it to stop popping that up? Or worse yet, autocorrecting?
If you go into your browser preferences you should have an option for that, though the exact path to it will vary with the browser.
 
Out of curiosity, I tried some other sources in ZeroGPT. A few pages of:
Ready Player One by Ernest Cline: 15% AI generated.
1984 by George Orwell: 88.99% AI generated.

Of course there was no AI when these were written.
Famous texts are likely to return unusually high "AI generated" scores because they show up (often repeatedly) in the corpus used to train large language models. When you train your LLM on Orwell, the output of that LLM will have resemblances to Orwell and therefore Orwell will have resemblances to your LLM.

So this doesn't give us an accurate idea of how texts that never appeared in the training corpus are likely to score.
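The corpus-membership effect described above can be illustrated with a toy model. To be clear, this is not how ZeroGPT actually works (its internals aren't public); it's a minimal sketch under one assumption: that a detector rates text as "AI-like" when a model trained on some corpus finds it unusually predictable. Text that was itself in the training corpus comes out as highly predictable, and therefore "AI-like", even though a human wrote it:

```python
import math
from collections import Counter

def train_bigram(text):
    # Count character bigrams in the "training corpus".
    counts = Counter(zip(text, text[1:]))
    return counts, sum(counts.values())

def avg_surprise(model, text):
    # Average negative log-probability per bigram, with Laplace smoothing.
    # Lower surprise = more predictable = flagged as "AI-like" by our toy detector.
    counts, total = model
    vocab = len(counts) + 1
    pairs = list(zip(text, text[1:]))
    return sum(-math.log((counts[p] + 1) / (total + vocab)) for p in pairs) / len(pairs)

# Train on a (repeated) snippet standing in for Orwell in the corpus.
corpus = "it was a bright cold day in april and the clocks were striking thirteen " * 20
model = train_bigram(corpus)

seen = "the clocks were striking thirteen"    # human-written, but in the corpus
unseen = "zebras quickly vexed jumpy gnomes"  # human-written, not in the corpus

# The in-corpus text is far more "predictable", so it would be flagged.
assert avg_surprise(model, seen) < avg_surprise(model, unseen)
```

The point of the sketch: the score measures resemblance to the training data, not authorship, so human prose that the model was trained on gets penalized the hardest.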
 
I hope you don’t mind. I ran your story, Chapter 1 “A kind and cruel dominatrix,” through ZeroGPT and got a 16.3% AI GPT score. This is much higher than I get for anything I’ve written, which makes me less concerned. You’re published, so I’m sure I’m fine. I’m just going to not worry about it.
Yeah... I'm glad it calmed your nerves, but please don't feed AI any more of my stories. I highly doubt it will affect anything, but the idea of helping those programs use my writing to further improve their ability to emulate actual creative writing makes me uncomfortable.
 
Famous texts are likely to return unusually high "AI generated" scores because they show up (often repeatedly) in the corpus used to train large language models. When you train your LLM on Orwell, the output of that LLM will have resemblances to Orwell and therefore Orwell will have resemblances to your LLM.

So this doesn't give us an accurate idea of how texts that never appeared in the training corpus are likely to score.
I've run some of my own stories from 2006-09 through AI detectors. I've gotten scores ranging from 0% to 90+% AI -- for stories written before AI writing tools.
 
On the one hand, I *think* I know, and that's a big step from confirmation. I think @StillStunned was talking about knowing in the sense of definitive facts, hard knowledge, and I don't have that.

On the other hand, at best I have a fuzzy outline of how it works and not any of the specifics.

On the other other hand, I know enough to help some people get their rejected works approved afterwards.
 
The OP (@Matt4224 ) is free to feed their own prose to whatever AI tool they choose, but feeding others' writings to one without explicit permission is a copyright violation as well as against LitE's TOU.
Sorry. I didn’t realize this. It won’t happen again.
 
It's really well known (apparently only outside of LitE) that AI detectors are wildly unreliable, to the point where people have thrown things like the US Constitution into them and had it come back as "AI writing".

However, after reading this thread, if the OP's story posts and it's not at least AI assisted, I will eat my own hat.

@AwkwardMD already listed numerous red flags from this poster 😂 For me it's the pre-emptive post warning us their pending story is not AI.
 
It would be fascinating to find out what the terms of service are of whatever AI detection tool Literotica uses.

I have trouble imagining that the tool doesn’t turn around and take the text given to it by Lit for its own AI training.
 
It would be fascinating to find out what the terms of service are of whatever AI detection tool Literotica uses.

I have trouble imagining that the tool doesn’t turn around and take the text given to it by Lit for its own AI training.
Honestly, I don't think they use one.

They are very clear in their TOS that they do not want people feeding stories from LitE into AI models, and I would like to trust their integrity far enough to say I don't believe they would then turn around and feed all submissions into a model.

So it kind of leaves two options:

1. They are using a 'homebrew' detector (I sincerely hope this isn't the case as I would not trust this to be effective at all)

2. They are not using software to determine if a work is AI generated or assisted at all. (I think this is the most likely option)

Laurel vets, what, hundreds, maybe thousands of submissions per week?

She would 100% get a feel for what is AI-generated and what is original content based on that, when everything starts sounding same-y. As much as people want to deny it, there are definite hallmarks of AI writing that may be innocuous on their own, but piled on top of each other they make it pretty obvious when a work is AI. No doubt this method gets it wrong occasionally, but it's probably the most accurate option.
 
Honestly, I don't think they use one
As many submissions as the site gets (I don't know the number, but I've heard estimates around 200-250 per day), they almost have to be using some kind of AI detection system, whether commercial or homegrown. No matter how diligent Laurel is, it's impossible for any human to read that many stories per day.

@AwkwardMD and @StillStunned claim to have some knowledge of how the system works, which backs up the idea that they are, in fact, using one. AwkwardMD says she thinks it's a homegrown system, not a commercial one like Originality, Sapling, or Copyleaks, though I've heard those mentioned on sites like Reddit as being the system used here.

Again, I have no personal knowledge about the system, but with the volume of submissions there pretty much has to be one.
 
As many submissions as the site gets (I don't know the number, but I've heard estimates around 200-250 per day), they almost have to be using some kind of AI detection system, whether commercial or homegrown. No matter how diligent Laurel is, it's impossible for any human to read that many stories per day.

@AwkwardMD and @StillStunned claim to have some knowledge of how the system works, which backs up the idea that they are, in fact, using one. AwkwardMD says she thinks it's a homegrown system, not a commercial one like Originality, Sapling, or Copyleaks, though I've heard those mentioned on sites like Reddit as being the system used here.

Again, I have no personal knowledge about the system, but with the volume of submissions there pretty much has to be one.
A scary thought. I think collectively most creatives don't trust commercial detectors to get it right; I would have no faith in a lone developer creating something more reliable than the trash that's freely available.

If this is true, it would explain why so many people come to the forums to say they've been incorrectly rejected for AI use.
 
How I would check for AI writing:
1) Open the file with Word or a similar product that uses whatever bot is built in these days.
2) If the bot thinks you have given it a fine piece of writing, it's probably unsalvageable and/or not suitable for posting here.
 
How I would check for AI writing:
1) Open the file with Word or a similar product that uses whatever bot is built in these days.
2) If the bot thinks you have given it a fine piece of writing, it's probably unsalvageable and/or not suitable for posting here.
Yeah, in most cases you might be right, but to play devil's avocado 🥑: before the advent of ChatGPT and other LLMs capable of generating large bodies of text, I (and probably a lot of people) ran every essay or story I ever wrote through a program like Microsoft Word to check my spelling and grammar. This hardly constitutes the work now being AI generated.

I'm specifically not referring to things like Grammarly that are capable of rearranging paragraphs and forming sentences; although that's a different issue, it's still nowhere near as egregious as having an LLM generate a story that a human might slightly tinker with before posting.

Edit to add: case in point, this reply contains plenty of grammatical errors Word could fix for me, and that wouldn't make it an AI response. (I'm just not that good, and my stories deserve better 😂)
 
I just ran my rejected story through an AI detector (ZeroGPT). It returned 11% likely generated by AI. I don't know what to think. I wrote everything myself, and yet this tool suggests otherwise (apparently that's good enough for this website?). It highlights all the suspect sentences. They are in random places, usually in the middle of paragraphs. Some of them are throwaway sentences I could eliminate with no effect on the story. So what do I do? Put every story through an AI detector and rewrite anything that gets a hit? How is that any way to work? Do we have to run scared because a tool randomly thinks we're using AI? Who trained the tool, and how did they validate their results?

This is BS!
 
Honestly, I don't think they use one.

They are very clear in their TOS that they do not want people feeding stories from LitE into AI models, and I trust their integrity far enough to say I don't believe they would then turn around and feed all submissions into a model….
It would be wonderful if you’re right.

I will say that part of my job entails being one of a group that reviews my employer’s software purchases for suitability. And the vendors that market themselves as having AI features allllll swear up and down that they won’t use our data for training. Do we believe them? It’s more that we tell them, “You have to say this or we won’t buy your product,” and therefore they all say, “We won’t use your data for training.”

I don’t have any facts on what Lit uses. We know the overall review process (even before AI became a thing) is a hybrid: manual review with one gatekeeper and some computer assistance. We know story approval is a bottleneck, so I suppose it’s plausible they don’t use an AI detector. Nonetheless, I choose to believe they use one. Not some all-encompassing script (they wouldn’t know how ;-), but I still choose to believe one is being used.
 
Not some all-encompassing script (they wouldn’t know how ;-), but I still choose to believe one is being used.
I avoid this part of the conversation because I don't really want to throw shade at anyone. But I think looking at the functionality of the website is enough to tell us they haven't developed their own functional and reliable AI detecting algorithm or software... 🤷‍♀️
 
Let's reintroduce some reason into all this discussion.

Judging by everything I've seen here, it's highly unlikely that Lit has its own homegrown algorithm for AI detection. An effective algorithm of that type would take an impressive amount of cutting-edge programming skill and knowledge, as well as resources and a lot of time for development and testing. I'm pretty sure that's not the case here.

On the other hand, the idea that Laurel actually reads all of the submissions and judges them based on her impression is even more absurd. I hope I don't need to explain why, it's plain common sense.

They probably use some commercial tool for detection. It's the most sensible explanation. It's possible that Laurel glances at the flagged submissions as well, depending on the time she has. This is, of course, all guesswork based on reason and Literotica's modus operandi so far.
 
Let's reintroduce some reason into all this discussion.

Judging by everything I've seen here, it's highly unlikely that Lit has its own homegrown algorithm for AI detection. An effective algorithm of that type would take an impressive amount of cutting-edge programming skill and knowledge, as well as resources and a lot of time for development and testing. I'm pretty sure that's not the case here.

On the other hand, the idea that Laurel actually reads all of the submissions and judges them based on her impression is even more absurd. I hope I don't need to explain why, it's plain common sense.

They probably use some commercial tool for detection. It's the most sensible explanation. It's possible that Laurel glances at the flagged submissions as well, depending on the time she has. This is, of course, all guesswork based on reason and Literotica's modus operandi so far.
This is essentially what I meant to say, except you managed to be a lot more polite about it than I was :ROFLMAO: Kudos!
 
The OP (@Matt4224 ) is free to feed their own prose to whatever AI tool they choose, but feeding others' writings to one without explicit permission is a copyright violation as well as against LitE's TOU.
I wonder: are we okay with Laurel (or other mods) feeding our prose into AI detectors during the vetting process?
 