"AI" Rejection

The only person associated with Lit who can answer the question is silent on the issue.
You're very welcome! It would be great if the thread served at least a little of its original purpose and provided some answers for those in this situation. And I absolutely understand how you feel. This has been a nightmare.
 
After two weeks of my story sitting pending, it was just rejected with the following message:

"Literotica is a storytelling community centered on the sharing of human adult fantasies. While we do not have a policy against using tools to help with the writing process (i.e. spellcheck, grammar suggestions, etc.), we do ask that all work published on the site at this time be created primarily by a human. If you are using a grammar check program sparingly (as a spellcheck, to fix punctuation, and/or occasionally as a thesaurus), that is fine. If you are allowing a grammar check program to “rewrite” your words, then you are using AI generated text."

I often write my stories in a combination of the notes app on my phone while I'm on the go and Grammarly when I'm home. I basically use Grammarly to check whether I missed or added commas and to catch the occasional comma splice. I reject basically all of its word suggestions because they don't sound human.

At this point though, I have no idea what to do. I have already written and edited the second part of my story and I had planned on submitting it this morning.

Any suggestions would be helpful!
 
If you haven't let some form of AI do most of the creative process, simply send it back with a note in the notes column saying you didn't use AI.

As I've noted before, until/unless the site can refine the scrutiny to be reasonably sure that AI has been overused, I think they should be giving the author the benefit of the doubt.
 
AMEN!
 
I'm curious how these AI detectors work. Are they inclusive or exclusive? That is, will they be triggered by the presence of seemingly AI-generated text, or by the absence of seemingly human-generated text? If I created a story with an AI, but then sprinkled idiosyncratic phrases throughout, would it likely get through?
 
They react to oft-used short sentences: "I doubt that," for example, would receive a yellow highlight, indicating mildly AI-generated text. Long, padded sentences with excessive filler words also trigger a response: "Their days were filled with boundless energy and shared adventures, as Susan eagerly embraced the responsibilities of puppy parenthood" would receive a RED highlight, indicating a strong possibility of being AI-generated.

Very interesting. Thanks.

It's interesting to me how many people in this forum have significant technological experience and knowledge. I wonder if there's a natural science geek-erotica aficionado connection.

I come from a strong liberal arts/relatively techno-impoverished background, so I'm always learning things here, and not just about smut.
 
Amazingly, this isn't so bad (ChatGPT) in the AI department, but it still says, "Hey, Bitch, you didn't write this."
[attached screenshot]
Add to that, it gets flunked for structure by both Grammarly and ProWritingAid.
[attached screenshot]
 
I have street smarts and self-taught computer adequacy. Note: I'm an expert at nothing but a dabbler in many things.
 

My AI-proof translation:


Susan looked up from the park bench at her golden retriever puppy, Max. "Big mistake," she thought, weary already from its boundless energy. It was only noon, and already they were on their second walk of the day. Susan had no choice but to walk the damn creature, or it would chew up all her furniture. Max jumped on her leg, wagging his tail. "Why don't you go play with that Rottweiler over there?" Susan asked. "It looks hungry." Susan groaned, but she felt a glimmer of hope. The local animal shelter wasn't too far away, and they didn't ask questions.
 
The issue isn't how AI writes (if we're not using AI); it's how we write.
 
I highly doubt a detector can distinguish between the two.
It depends on what you're using it for. The Bard fragment would be great for a Corporate Guide to Erotica In the Workplace, if you have plans to submit it to Humour & Satire.
 
There seems to be a consistent theme of "I use these 2-3 apps to check for spelling and grammar, but don't understand why my work gets flagged" in these discussions.

Has anyone here been flagged for just using spell-check in a program like Word?

I regularly get beat up for typos, name mix-ups, etc., but for me that is the cost of the creation process: I am far more focused on the skyscraper I'm trying to build than on how perfectly every brick is formed and placed. AI is trained on each brick and how it aligns with all the other bricks around it, even if the finished building isn't very good once they are all assembled.

Whether we realize it or not, we can't crank out 5,000-10,000 words of thoughtful text without finding and revealing our own voice. It's distinct enough that any combination of large writing samples is likely to reveal us, even if they are submitted anonymously. It's already happening. Your natural voice, even mixed in between blocks of AI-generated text, will read differently if you look closely enough. Which is all a computer does: look very, very closely, in a very short period of time, even with large volumes of text.

AI is not guessing. It's assigning probability based on patterns that can be methodically broken down into how AI processes and translates written ideas. If you feed it a block of text and it lines up exactly against its own internal logic, it recognizes its own voice. It's not magic or alchemy. It's the same way it recognizes music samples, photographs, etc. It holds the two things up and looks for how closely they match, at a level of detail that is very difficult, or incredibly time-consuming, for us to do ourselves. It already has an internal reference to match against, but it's too complex for us to see, measure, or anticipate.

If you think the detection is faulty or too simple in its analysis, or that AI is just too close to your natural voice, read some AI-generated text and try to mimic the style in 2-3 paragraphs on a completely different subject, without actually copying or reusing anything from what you read. Feed it in and see how high it scores the probability that it was AI-generated. It's not as easy to trip as you might think, even when you are trying. Try to mimic the writing style of Hemingway or Stephen King, even if you've read everything they've published. You won't be able to do it well enough to fool AI if it has broken down the patterns in their complete works.

Put another way- If the rules say you can only use hand tools and power tools are specifically not allowed, don't use a power tool. Even for finish or prep work. If evidence of one power tool use will get you DQ'd, but 1,000 or 10,000 imperfections with a hand tool will not, embrace the imperfections. They are an inherent part of the process when things are made by hand. Or have another human edit it for you.

Because that is what we are all supposed to be after. Something unique, that cannot be mass-produced. Your own, true, messy, flawed and irreplaceable voice.
 
I'm curious how these AI detectors work. Are they inclusive or exclusive? That is, will they be triggered by the presence of seemingly AI-generated text, or by the absence of seemingly human-generated text? If I created a story with an AI, but then sprinkled idiosyncratic phrases throughout, would it likely get through?

https://arstechnica.com/information...-think-the-us-constitution-was-written-by-ai/

Most of them use a combination of checking for "perplexity" and "burstiness."

In machine learning, perplexity is a measurement of how much a piece of text deviates from what an AI model has learned during its training. As Dr. Margaret Mitchell of AI company Hugging Face told Ars, "Perplexity is a function of 'how surprising is this language based on what I've seen?'"

So the thinking behind measuring perplexity is that when they're writing text, AI models like ChatGPT will naturally reach for what they know best, which comes from their training data. The closer the output is to the training data, the lower the perplexity rating. Humans are much more chaotic writers—or at least that's the theory—but humans can write with low perplexity, too, especially when imitating a formal style used in law or certain types of academic writing. Also, many of the phrases we use are surprisingly common.

Let's say we're guessing the next word in the phrase "I'd like a cup of _____." Most people would fill in the blank with "water," "coffee," or "tea." A language model trained on a lot of English text would do the same because those phrases occur frequently in English writing. The perplexity of any of those three results would be quite low because the prediction is fairly certain.

Now consider a less common completion: "I'd like a cup of spiders." Both humans and a well-trained language model would be quite surprised (or "perplexed") by this sentence, so its perplexity would be high. (As of this writing, the phrase "I'd like a cup of spiders" gives exactly one result in a Google search, compared to 3.75 million results for "I'd like a cup of coffee.")
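
To make that concrete, here's a minimal sketch of how a detector might turn the idea into a number, using GPT-2 through the Hugging Face transformers library. The model choice, and scoring a whole passage with a single mean, are my own illustrative assumptions; real detectors use their own models, chunking, and thresholds.

```python
# Minimal perplexity sketch (illustrative only): score text with an
# off-the-shelf language model and see how "surprised" it is.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Passing labels=input_ids makes the model return the mean negative
    # log-likelihood of the tokens; exponentiating that gives perplexity.
    # Lower = more predictable to the model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("I'd like a cup of coffee."))   # low: very common phrasing
print(perplexity("I'd like a cup of spiders."))  # higher: the model is "perplexed"
```

On those two phrases, the "coffee" sentence should come back with a noticeably lower score than the "spiders" one, which is exactly the signal these tools lean on.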

Another property of text measured by GPTZero is "burstiness," which refers to the phenomenon where certain words or phrases appear in rapid succession or "bursts" within a text. Essentially, burstiness evaluates the variability in sentence length and structure throughout a text.

Human writers often exhibit a dynamic writing style, resulting in text with variable sentence lengths and structures. For instance, we might write a long, complex sentence followed by a short, simple one, or we might use a burst of adjectives in one sentence and none in the next. This variability is a natural outcome of human creativity and spontaneity.

AI-generated text, on the other hand, tends to be more consistent and uniform—at least so far. Language models, which are still in their infancy, generate sentences with more regular lengths and structures. This lack of variability can result in a low burstiness score, indicating that the text may be AI-generated.
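
As a rough, back-of-the-envelope illustration (not how any particular detector actually computes it), burstiness can be approximated as the spread of sentence lengths; the word-count standard deviation below is my own simplification.

```python
# Toy burstiness sketch (illustrative only): measure how much sentence
# lengths vary. Uniform lengths -> low score -> "machine-like" pacing.
import re
import statistics

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human_like = ("It rained. We ran for the car, laughing, soaked through "
              "before we made it halfway across the lot. Typical.")
uniform = ("The day was pleasant and calm. The weather was warm and nice. "
           "The park was green and quiet.")

print(burstiness(human_like))  # larger: sentence lengths swing wildly
print(burstiness(uniform))     # near zero: every sentence is the same length
```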

However, burstiness isn't a foolproof metric for detecting AI-generated content, either. As with perplexity, there are exceptions. A human writer may write in a highly structured, consistent style, resulting in a low burstiness score. Conversely, an AI model might be trained to emulate a more human-like variability in sentence length and structure, raising its burstiness score. In fact, as AI language models improve, studies show that their writing looks more and more like human writing all the time.

Ultimately, there's no magic formula that can always distinguish human-written text from that composed by a machine. AI writing detectors can make a strong guess, but the margin of error is too large to rely on them for an accurate result.

The bottom line, from the article, is this:

A 2023 study from researchers at the University of Maryland demonstrated empirically that detectors for AI-generated text are not reliable in practical scenarios and that they perform only marginally better than a random classifier. Not only do they return false positives, but detectors and watermarking schemes (that seek to alter word choice in a telltale way) can easily be defeated by "paraphrasing attacks" that modify language model output while retaining its meaning.

"I think they're mostly snake oil," said AI researcher Simon Willison of AI detector products. "Everyone desperately wants them to work—people in education especially—and it's easy to sell a product that everyone wants, especially when it's really hard to prove if it's effective or not."

Additionally, a recent study from Stanford University researchers showed that AI writing detection is biased against non-native English speakers, throwing out high false-positive rates for their human-written work and potentially penalizing them in the global discourse if AI detectors become widely used.

I wrote a bunch on this (and why they mostly suck), my own process, the witch hunt issue, and more over at https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-26#post-97985369 and in later parts of that thread, mostly in the process of disassembling a witchhunter's arguments.

Some other key posts:

My process, which utilizes tools in a way that Laurel has no problems with, and which has never gotten dinged as AI: https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-19#post-97969898

Non-writing places where AI detection/assessment has failed horribly: https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-24#post-97976494

A primer on how ChatGPT works in the first place: https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-24#post-97979855

A quick assessment of the tool the witchhunter was trying to say "proved" that I and others used AI text generators, flagging all but one of the top 10 most read stories on the site (the newest of which was posted in 2009): https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-28#post-97991991

My thoughts on the hypocrisy of "purists" saying we had to conform to their preferred tools: https://forum.literotica.com/threads/ai-allegations-thread.1599778/page-28#post-97992782

And, over in another thread, the way that I did use AI in artwork, how I did, and the things I'm still wrestling with: https://forum.literotica.com/threads/about-that-ai-assist-in-writing.1600310/post-97980469

I'm probably going to turn all of this into an essay at some point. :D Loving AI, here I come!
 
If you think the detection is faulty or too simple in its analysis, or that AI is just too close to your natural voice, read some AI-generated text and try to mimic the style in 2-3 paragraphs on a completely different subject, without actually copying or reusing anything from what you read. Feed it in and see how high it scores the probability that it was AI-generated. It's not as easy to trip as you might think, even when you are trying. Try to mimic the writing style of Hemingway or Stephen King, even if you've read everything they've published. You won't be able to do it well enough to fool AI if it has broken down the patterns in their complete works.
Except the AI detectors are really, really shitty at it. Pretty much everyone in the ML field that isn't trying to sell one has the same advice: "don't use it for anything important, and don't use it as the only tool to evaluate."
 
AI is not guessing. It's assigning probability based on patterns that can be methodically broken down into how AI processes and translates written ideas. If you feed it a block of text and it lines up exactly against its own internal logic, it recognizes its own voice. It's not magic or alchemy.
It's also not reliable.

When the companies that make the AI pull their own AI checker from use because it cannot reliably detect output from their own product, that should give you pause about trusting any AI checkers.
 
Additionally, most of the supposed AI "fingerprints" that people talk about and the AI checkers look for apply primarily to AI-generated text, and to a much lesser extent to text that has been AI-edited from a human draft (depending of course on the frequency and amount of editing). So if the checkers are unreliable even in the easiest case, consider how useless they are in the trickier one.
 
Do we throw out all the refs, along with the rule book, if some of the calls aren't perfect?

Of course not.

I'd only ask folks to keep the stakes involved in context.

If you're writing a PowerPoint presentation or a sales proposal, no one cares if you use AI to make it easier to complete and more professional.

If you're trying to auction/publish a previously undiscovered Hemingway or Crichton novel, expect that everyone with an interest in its potential value is going to have it dissected to the nth degree. Every. Single. Time.

If you're writing for your own enjoyment, do whatever you want. All good. This is also true if you want to use PEDs as a recreational runner, or cover Taylor Swift tunes in your garage. Once you enter an event with an organizing body or play a live venue, you have to respect the rules. You could only play original songs at CBGB's because Hilly Kristal didn't want to pay ASCAP. I think we can all respect that. Many of us are eternally grateful for the music that got made because of that rule.

If you want someone to publish it, even if it's just online, I think it's fair that you have to align to their standard. It's their call. If they want to use the fastest, cheapest method out there and it generates a few bad calls along the way, that's appropriate to the fact that it costs nothing to submit here. This site is the entity with the most to lose if they miss something that should have been rejected. The stakes are much higher for them. It's not a conspiracy. It's just common sense.

I'm not sure why anyone would think that resubmitting a bunch of times or being more aggressive would be the solution. Especially if you know you ran it through an app at any point in the process. I see a lot of writers link to other places where their readers can find their work. I would just post it there and count on the fact that your readers will find it.

Still interested in hearing if anyone has been rejected without ever running their work through an app before submission.
 
I'm not sure why anyone would think that resubmitting a bunch of times or being more aggressive would be the solution. Especially if you know you ran it through an app at any point in the process. I see a lot of writers link to other places where their readers can find their work. I would just post it there and count on the fact that your readers will find it.

Still interested in hearing if anyone has been rejected without ever running their work through an app before submission.
People submit a bunch of times, with comments explaining they did not use AI, because they don't want to change the human words they wrote to satisfy some program. If an AI checker flags my work, and I change it until it doesn't, I just let that AI checker rewrite my work. Isn't that what we're trying *not* to do?
 
Still interested in hearing if anyone has been rejected without ever running their work through an app before submission.
A number of people have claimed to, and that’s the problem. They’ve claimed it, but they can’t prove it. There’s no way to prove it.

Someone (the aforementioned witch-hunter) ran my stories through Sapling’s AI detector, and (according to him) every one of my stories got pinged as “highly likely.” Mind you, I didn’t use more than a spell-checker on anything until June, by which time I had dozens of published stories on the site.

That’s part of the problem: people can’t confront their accuser.
“The AI says you’re using AI.”
“Okay, which AI, so I can know what it doesn't like?”
“Just make it sound more human.”
“By what criteria?”
“Oh, you’ll figure it out.”

Compounding that, Sapling (and probably others) sees amateurish writing as AI content. I put the top 10 most-read stories on the site through it, none of them published after 2009, and all but one of them got flagged as having at least some AI-generated text. Two of them scored 95-100%!

This is an amateur writing site. Yeah, there are contests, but it’s still an amateur site. If your detector is looking for things that are signs of amateur writing (which low burstiness and perplexity often are), it’s going to tag A LOT of stuff.

The people who have gotten dinged so far don't have a following; they're just starting out. So telling them, "well, just put a link to the other places you write on your profile," isn't helpful. If I ever start getting pinged? Yeah, I'll do just that. But for those folks? It's the "Well, how much could a banana cost? $10?" of writing advice.
 