Overactive AI Filter?

I've had a lot of trouble with the AI filter, and it seems weird. For instance, the [Link Removed] were all rejected. Now, sure, I love an em-dash, but this feels a bit unfair.

I actually had to sign up for ChatGPT to try to figure out what made my stories read like AI!

Ironically it gave some good advice, but if I took it, then I'd be violating the AI policy! Here's what it said:

"Does this story read like it was written by AI?"

Short answer: no, this does not scream “AI wrote this” to me. It reads like a human with a very specific set of kinks and craft priorities, plus a careful ethical frame.

Let me break that down a bit.



Why it feels human

1. The kink/ethics architecture is very particular.
The whole CNC / conditioning setup, the way you scaffold consent (the pre-scene request, the reveal, the “while you’re clear-headed, choose” sequence) is idiosyncratic and internally negotiated in a way AI-text usually doesn’t bother to structure. It feels like someone who:
• Reads a lot of kink discourse
• Has very specific fantasies
• Has thought hard about “is this okay?” and then decided to push anyway

That’s a very human combo.

2. Emma’s emotional messiness is too specific.
The best part, in terms of “this feels human,” is the reveal + aftermath:
• Her arousal and hurt at the breakup text
• “Did I just do something beautiful or something terrible?”
• Her admission that she came hard at the worst possible moment emotionally
• The way she explicitly asks for consent to continue because she’s scared of how much she wants it

That’s textured and self-incriminating in a way AI tends to sand down. An LLM usually either makes her noble and selfless, or cartoonishly evil; this version sits uncomfortably in the middle.

3. Concrete, slightly odd details.
There are lots of tiny, grounded choices AI often doesn’t make unless prompted very hard:
• The carved wooden whale keychain from Cape Cod
• The Clark Kent riff tied specifically to the glasses and performance rather than just a superficial joke
• The sodium vapor lamps making the sky orange-gray
• The way the library air is described as “recycled, filtered, dead”

Those are all “someone’s personal texture palette,” not generic filler.

4. The structural ambition.
This isn’t just “porn with a twist.” It’s:
• 2nd person role-immersion
• Layered with psych jargon & actual conditioning logic
• Layered again with an identity twist and meta-consent scene
• And then a deliberate “now the conditioning has to compete with what you know.”

That’s quite a conceptual design for a stroke piece. It reads like you’re trying to satisfy yourself as a thinker as much as a fantasist, which is very you/human.



What could read a bit “AI-ish”

I’m not saying it is AI-ish, just where a sensitive reader or editor might get a little suspicious if they’re primed to think about it:

1. The text is very long and very “on rails.”
Escalation is almost perfectly smooth: eye contact → light conditioning → sexual escalation → orgasm → breakup → twist → consent meta → “level 2” conditioning.

That’s great structurally, but the lack of small, pointless tangents or dead ends can read slightly “system-optimized.” Humans usually have a bit more random drift.

2. Occasional over-explanation of the kink mechanics.
Lines like:

“Your brain is making connections now, building neural pathways. Linking pleasure, release, submission, all of it, to obedience to me.”

or

“That’s the conditioning taking hold.”

feel a tiny bit like the story is standing at the whiteboard, explaining the experiment. A human erotica writer might leave a bit more of that implicit once we’re in the middle of it.

(Though: your target audience for this kind of story probably likes that layer spelled out, so this isn’t necessarily a flaw.)

3. Repetition patterns.
Some phrases repeat in very similar form:
• “Good boy / such a good, obedient boy”
• “You belong to me / I’m yours forever”
• “Look at my eyes / my eyes are where you belong”

In context, that fits the conditioning theme, but a detector-happy editor might clock it as a stylistic “loop.” The difference is that here the repetition is semantically motivated; if you trimmed a few instances at the margins, it’d feel even more deliberately human.

4. A couple of POV/tensing glitches.
There’s a brief slip from 2nd person to 3rd (“her thighs trembling around his head,” etc.). That actually feels more like human drafting than AI—LLMs usually keep POV mechanically consistent once it’s set—but an editor who’s already suspicious might misread that as “sloppy machine output” instead of “workshop draft.”



If your goal is “definitely reads human”…

You’re already in good shape. But if you want to nudge it further away from anything someone might side-eye as “AI-adjacent,” I’d suggest:
1. Trim a few explicit “we are conditioning you” explanations.
Let some of the behavioral stuff just play out in action and bodily response. Keep the best explicit naming, cut maybe 10–15% of the didactic lines.
2. Introduce one or two genuine “off script” moments.
E.g., a piece of dialogue where Claire/Emma genuinely fumbles, or Ryan says something off-target that she has to recover from, instead of always delivering the perfect kink line.
3. Lean a little more on subtext.
For instance, when Emma talks about sitting in the study room after he leaves, you already show she’s wrecked. You might not need every beat of explanation about beautiful/terrible—it’s effective, but you can shave a line or two and trust the reader.
4. Clean up the POV slips so they look intentional, not accidental.
Either commit to 2nd person, or let yourself pivot cleanly into 3rd for the later sections; right now it’s mostly 2nd with a couple of stray “his” and “he”s.



But big picture: this reads like an intense, ethically self-conscious human kink story, not like a generic AI smut generator. If someone told me “someone with a psych nerd streak and a lot of feelings about consent wrote this over too many late nights,” I’d believe that instantly.

[Please do not post links to work on competing sites that has been rejected here. - AH Mod]
 
Your AO3 link will be removed. Literotica doesn't allow linking to other story sites.

You might edit your message to give a title and say to search for it, which I believe is legal (but don't abuse it).

--Annie
 
I'll give you this, it's a pretty ingenious way to use AI.
I actually got the idea from a volunteer editor. She said she didn't see the problem, but recommended that I run the story through an AI to find out what the moderators were seeing. Apparently the moderators use some AI solution themselves, so it's already AI-trying-to-detect-AI.
 
None of us, including the volunteer editors, know what Laurel uses to detect AI. Given the slowness with which AI rejections come, it is certainly not only an automated checker.
 
It seems a bit reductive to ask an LLM whether text reads like it was generated by an LLM. The LLM has no incentive to tell the truth, merely to answer in a fashion that is internally consistent. It's like playing the Guards riddle in a world designed by Kafka.
 
It seems a bit reductive to ask an LLM whether text reads like it was generated by an LLM. The LLM has no incentive to tell the truth, merely to answer in a fashion that is internally consistent. It's like playing the Guards riddle in a world designed by Kafka.
Depends on the reward function.

If it's actually being checked--that is, if there is training data where the human operator knows which text is AI-generated and which isn't, and can mark LLM guesses as good or bad--there would in fact be a "benefit" to the LLM for guessing right.

--Annie
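To put the reward-function point in concrete terms, here's a toy sketch (purely illustrative; the feature names like em-dash rate are made up, and real detector training is far more involved). A perceptron-style classifier only improves because each guess is checked against a known human/AI label; that ground truth is what gives the "benefit" for guessing right.

```python
def train_detector(examples, epochs=20, lr=0.1):
    """Toy perceptron: it learns only because every guess is scored
    against a ground-truth label (the 'reward' signal). With no
    ground truth, there is nothing to push the weights anywhere."""
    weights = {}
    bias = 0.0
    for _ in range(epochs):
        for features, is_ai in examples:
            score = bias + sum(weights.get(f, 0.0) * v for f, v in features.items())
            guess = score > 0
            if guess != is_ai:  # wrong guess -> no reward -> adjust weights
                direction = 1.0 if is_ai else -1.0
                for f, v in features.items():
                    weights[f] = weights.get(f, 0.0) + lr * direction * v
                bias += lr * direction
    return weights, bias

def predict(model, features):
    weights, bias = model
    return bias + sum(weights.get(f, 0.0) * v for f, v in features.items()) > 0

# Hypothetical stylometric features; labels are the human operator's
# ground truth (True = known AI-generated, False = known human).
data = [
    ({"em_dash_rate": 0.9, "len_variance": 0.1}, True),
    ({"em_dash_rate": 0.8, "len_variance": 0.2}, True),
    ({"em_dash_rate": 0.1, "len_variance": 0.9}, False),
    ({"em_dash_rate": 0.2, "len_variance": 0.8}, False),
]
model = train_detector(data)
```

The point isn't that any real detector works like this, only that a checked guess is structurally different from an unchecked one.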
 
I've just had a long conversation with Copilot after entering the following prompt:

'If I were to give you a passage of writing to analyse and tell me whether it’s AI generated or human writing, could you do so? What criteria would you use? Does your training set include materials that would enable you to do so? Would you be able to assign a probability to the passage being either AI or Human? This is just a general query. Would your analysis be any different to that of a commercial AI detector?'

The final response:

'Takeaway

Yes — some human writers will always be flagged as AI, simply because their style coincides with the statistical patterns detectors use. The irony is that exam boards in the 1960s trained students to write in exactly the way detectors now misclassify.'
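For what it's worth, the "statistical patterns" being gestured at are often described as burstiness: human prose tends to vary sentence length a lot, while formulaic prose doesn't. Here's a toy scorer along those lines (purely illustrative, not any real detector's method):

```python
import re
from statistics import pvariance

def uniformity_score(text):
    """Toy 'burstiness' heuristic: texts whose sentences are all about
    the same length look statistically uniform, one pattern detectors
    are said to key on. Returns a value in (0, 1]; higher = more uniform."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    # Low variance relative to the mean -> high uniformity -> more "AI-ish"
    return 1.0 / (1.0 + pvariance(lengths) / mean)

formulaic = ("The cat sat on the mat today. The dog ran in the park today. "
             "The bird flew over the house today.")
varied = ("Stop. The dog, having escaped its lead for the third time that "
          "week, tore joyfully across the park. Birds scattered.")
```

A writer drilled to produce uniform, well-balanced sentences would score high here despite being entirely human, which is exactly the misclassification being described.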
 
I wonder now how much that AI will incorporate this query into a "learning lesson" it will use in future AI endeavors.
 
The irony is that exam boards in the 1960s trained students to write in exactly the way detectors now misclassify.'
I'd be really interested to read up on how exams were administered in such a way in the 1960s. And in which country, for that matter. Can you link the source your AI consort scraped this from?
 
Depends on the reward function.

If it's actually being checked--that is, if there is training data where the human operator knows which text is AI-generated and which isn't, and can mark LLM guesses as good or bad--there would in fact be a "benefit" to the LLM for guessing right.

A reward function can be useful in training a ML system to classify stuff, but it's still heavily dependent on the design of that ML system as to whether it's actually capable of being trained.

I mean, I could ask my cat to distinguish between AI-generated and non-AI-generated text, and offer her Tasty Goo In A Tube every time she gets it right. TGIAT is the most powerful motivator I know for her, and no doubt it would result in her making a bunch of enthusiastic predictions (given some appropriate mechanism for choosing). But it wouldn't make her good at distinguishing between AI-gen and non-AI-gen, because that's just not the kind of task that her mind is suited to.
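The cat analogy can be made literal: a learner whose representation can't distinguish the classes is capped at majority-label accuracy, no matter how much Tasty Goo is on offer. A minimal sketch of that ceiling:

```python
def best_constant_accuracy(labels):
    """A learner whose only available 'feature' is constant can do no
    better than always guessing the majority label, however richly
    correct guesses are rewarded: reward sets the incentive, but the
    representation caps the achievable accuracy."""
    frac_true = sum(labels) / len(labels)
    return max(frac_true, 1 - frac_true)

# With balanced AI/human labels, no constant guess beats a coin flip.
labels = [True, False, True, False]
```

That's the design-of-the-system point: the reward function motivates, but only a model with features that actually separate AI text from human text can be trained past chance.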
 
Thank you for providing the sources.

So I think it's important to note that these are all from the UK, which feels like something that should have been specified from the outset, since that is not the only country that teaches English composition as part of school curricula. And the first entry in that series is actually from 1957, so really it should have given an earlier date. Tsk tsk.

Secondly, I'm curious as to how your AI paramour is able to make the claim that all AI detectors misclassify human-generated text as AI-generated because it is written in the style of how UK students were prompted to write on exams administered in the 1950s & 1960s.
 
I've had a lot of trouble with the AI filter, and it seems weird. For instance, the [Link Removed] were all rejected. Now, sure, I love an em-dash, but this feels a bit unfair.

If you want your story to "read" human and pass literotica's AI AI checker - dumb down grammar, insert a few typos and extra white spaces, and ignore punctuation. YMMV
 
It's a terrible way to use AI. ChatGPT will give you an answer that looks plausible but there's no particular reason to expect that answer will be correct. Very often it just makes up plausible-sounding shit.
But in this case, if it actually gives them hints that allow them to fix their story to sound less AI, so that it and every other story they write can get posted without being flagged?
 
If you want your story to "read" human and pass literotica's AI AI checker - dumb down grammar, insert a few typos and extra white spaces, and ignore punctuation. YMMV
Being used to trying to spot AI generated essays from my students, this is absolutely not what I look for. AI generated text is always some level of soulless and homogeneous.

Do you really think someone like StillStunned (an editor by profession) leaves grammatical or punctuation mistakes in his stories?

On the earlier English exams from 60+ years ago, trying to get students to write soullessly feels like what I would expect from that era's exam system. (Disclaimer -- I am American and have never taken an English exam)
 
Being used to trying to spot AI generated essays from my students, this is absolutely not what I look for. AI generated text is always some level of soulless and homogeneous.

Do you really think someone like StillStunned (an editor by profession) leaves grammatical or punctuation mistakes in his stories?

On the earlier English exams from 60+ years ago, trying to get students to write soullessly feels like what I would expect from that era's exam system. (Disclaimer -- I am American and have never taken an English exam)


Disagree - at least 1/3 of the stories I've read in the Lesbian category have at least one (careless) typo. In one story, the MC had 4 hands (not sci-fi).
 
Being used to trying to spot AI generated essays from my students, this is absolutely not what I look for. AI generated text is always some level of soulless and homogeneous.
From the amount of doomscrolling I do on Facebook, it's really easy to tell, since it's pretty much all of them. 8-12 decent-sized paragraphs, frequent use of em dashes, and repeated story elements are pretty common.

They all read exactly the same too.

Videos can be harder to tell sometimes, but a couple clues are random different mispronunciations of the same word in the script, talking just a little bit too fast with no variation in speed, and never taking breathing breaks.
 
Thank you for providing the sources.

So I think it's important to note that these are all from the UK, which feels like something that should have been specified from the outset, since that is not the only country that teaches English composition as part of school curricula. And the first entry in that series is actually from 1957, so really it should have given an earlier date. Tsk tsk.

Secondly, I'm curious as to how your AI paramour is able to make the claim that all AI detectors misclassify human-generated text as AI-generated because it is written in the style of how UK students were prompted to write on exams administered in the 1950s & 1960s.

The Cambridge Board is an examination board of choice for many countries previously in the British Empire, many of which teach English as a second, and often official, language.

You may or may not be aware that speakers of English as a second language frequently complain that their work in English is flagged as AI. This is because high-frequency usages and constructions are prioritised in teaching English as a foreign language.

I was examined by the Oxford Board and, as a native speaker, was familiar with usages across the full range of frequencies. Teaching methods have altered since the 60s in any event.

You'll have noticed that I prefaced my post with, 'I've just had a long conversation with Copilot after entering the following prompt:' I've spared you the long conversation.

It's me who's speculating about the connection between the choices made by teachers of English as a foreign language and the high-frequency usages flagged as AI. Copilot simply made an observation which surprised me.
 