A thread for examples of writing that was rejected for being generated by AI.

AG31

This thread is meant to collect examples of writing that has been rejected as AI. Just a few paragraphs, not links to whole stories. Maybe we can see a pattern.

@AwkwardlySet said,
There is no appealing to the tool Laurel is using. She certainly isn't an AI detection specialist, nor does she have the time to read such stories herself. She uses a tool (as far as we know), and the tool has its obvious limitations. So, fair or not, authors who get flagged need to find a way to get past whatever threshold Laurel has set in the aforementioned tool.

And @CatPerson replied,
That is the problem: we have no idea what that threshold is or what to change. If the so-called threshold is a stylistic one, then no matter how many times we rewrite parts, or the entire story, it will still be flagged, since presumably we are rewriting it in the same style. This, in my opinion, is the worst-case scenario, since I, at least, am not too keen on changing my own style just to suit an algorithm.

If, on the other hand, it's specific things, like repetition, certain key words, or word counts by phrase or paragraph, then we can work those out of our stories without changing the way we write. This would then have to be accompanied by some transparency on the part of the Lit staff, giving more information on why a specific story was rejected.

Here's the thread. And here's an example of what I'm talking about. Oh, dear. I can't find the original example of writing. Here's the critique, which has some quotes. I didn't think you could delete a post. A puzzle.
 
Sure, I'll bite. Here is a fragment of my latest story, which was flagged by the AI detection tool. I wrote the story and worked alongside @neuroparenthetical as my editor.
_____________________________________________________________________________________________

Señor Miguel stormed out of the bustling kitchen, his temper flaring. The clatter of pots and pans, the sizzling of meat and onions, and the hurried footsteps of the kitchen staff couldn't faze the imposing man. He headed straight for a bottle of tequila behind the bar, uncorking it with a flourish. He poured himself a shot and tossed it back, relishing the smoky burn as it trailed down his throat.

After a moment's rest, he began pushing his way through the crowded restaurant. Bottle in hand, his destination was a balcony table that commanded some of the best views of the picturesque Cabo coastline. Once there, he set down the bottle and two shot glasses. David was seated on the opposite side, smoking a fine cigar. His eyes were fixed on the glistening sea, and his thoughts were miles away. He barely registered Señor Miguel's arrival until the boisterous man clapped him heartily on the back.

"Ah, David, my friend!" Señor Miguel boomed, leaning back with a hearty laugh. "What's gotten into you? You're normally the life of the party, always with a new lady at your side. You usually remind me of my younger days, carefree and chasing the señoritas, but today you seem like a totally different person."

David turned to face the older man, his expression blank, not revealing the tempest of emotions churning inside. He'd long ago learned to hide his feelings -- especially the ones that still haunted him.

Señor Miguel, still grinning, sat down and poured two shots of tequila. "All this talk about the ladies reminds me: I have some juicy gossip to share with you. Just the other night, two of the many beautiful women you've brought here came back. Emily, I think, was the one you brought here the most. Beautiful girl, that one. And the other one, her name escapes me right now..."

"Claudia," David interjected, tossing back a shot of tequila.
 
I am hardly an expert, but I don't see much, if any, resemblance to the way AI-generated text usually reads. Strange 🫤
 
I'll respond to both samples submitted so far. I'm being honest and trying to be helpful, not snarky or dismissive. I don't accuse anybody of anything.

If I were vetting these stories for AI, based on these passages I would be suspicious that they were generated in part by AI. The style of writing doesn't ring quite right to me. There are too many commas, and the passages are a bit over-written. There's a lack of sensible flow from one sentence to the next. For instance, in the first passage, Senor Miguel "stormed" out of a kitchen, but various kitchen sounds didn't "faze the imposing man." What does that mean? He's angry, so he's fazed. What does it mean to say that he's not fazed when from the first sentence he obviously is? One sentence later he uncorks a bottle of tequila "with a flourish," which also seems at odds with the mood of the first sentence, and then in the next sentence he "relishes" the "smoky burn." Why is he relishing something if he is angry? To my eye, there is an odd disjunction in the mood and meaning from one sentence to the next that suggests a lack of clear awareness of the impact of the words that I would tend to associate with AI.

In the second passage, in the first paragraph, the narrator says they've spent enough time in bowling alleys to be able to ignore the "sensory nightmare," but in the next sentence says "it was nearly impossible to ignore it completely." So which is it? There is a discontinuity of real meaning from one sentence to the next that suggests to me the possibility of AI: words are being put together, but they don't quite make sense together. And the sentences seem a little over-done. Too many phrases and clauses put together, something that as far as I can tell is common with AI.

I'm not accusing anybody of using AI here. If you say you are not, I believe you. But these passages make me wonder if newer authors are being influenced by AI tools and encouraged to write like AI even if they are not actually using it.
 
Another attempt by the OP to act like there is no issue here, and that people are attacking the site, which is of course incapable of error.
 
I'll respond to both samples submitted so far. I'm being honest and trying to be helpful, not snarky or dismissive. I don't accuse anybody of anything.

If I were vetting these stories for AI, based on these passages I would be suspicious that they were generated in part by AI. The style of writing doesn't ring quite right to me. There are too many commas, and the passages are a bit over-written. There's a lack of sensible flow from one sentence to the next. For instance, in the first passage, Senor Miguel "stormed" out of a kitchen, but various kitchen sounds didn't "faze the imposing man." What does that mean? He's angry, so he's fazed. What does it mean to say that he's not fazed when from the first sentence he obviously is? One sentence later he uncorks a bottle of tequila "with a flourish," which also seems at odds with the mood of the first sentence, and then in the next sentence he "relishes" the "smoky burn." Why is he relishing something if he is angry? To my eye, there is an odd disjunction in the mood and meaning from one sentence to the next that suggests a lack of clear awareness of the impact of the words that I would tend to associate with AI.

In the second passage, in the first paragraph, the narrator says they've spent enough time in bowling alleys to be able to ignore the "sensory nightmare," but in the next sentence says "it was nearly impossible to ignore it completely." So which is it? There is a discontinuity of real meaning from one sentence to the next that suggests to me the possibility of AI: words are being put together, but they don't quite make sense together. And the sentences seem a little over-done. Too many phrases and clauses put together, something that as far as I can tell is common with AI.

I'm not accusing anybody of using AI here. If you say you are not, I believe you. But these passages make me wonder if newer authors are being influenced by AI tools and encouraged to write like AI even if they are not actually using it.
While I did notice a couple of the things you mentioned, and while I do agree that they are somewhat awkward, how do you imagine that an AI detection tool can understand and recognize the absence of continuity and meaning? Once again, we are falling into the trap of finding some stylistic errors within the text that these authors post rather than focusing on some more obvious tells. I refuse to believe that a tool can pick up on the discontinuity of meaning, and there is no freakin way that Laurel actually reads these. The submission would at least need to get flagged first in order for Laurel to even consider checking the story herself, if she ever does check stories personally. But how the fuck does the tool flag the discontinuity of meaning?
Unless, of course, Laurel has true AI in her possession and she uses it to detect the work of these lame AIs. I imagine that, somewhere out there, @ElectricBlue is burning a candle now. :D
 
Noting that each of these is an excerpt, maybe 5-10% of a story that was submitted and rejected. AFAIK we have no way of knowing whether the "AI detector" found anything suspicious at all in these particular excerpts, or if it's responding to something elsewhere in the story.

While I did notice a couple of the things you mentioned, and while I do agree that they are somewhat awkward, how do you imagine that an AI detection tool can understand and recognize the absence of continuity and meaning?

Exactly. If we could build detectors that could identify these attributes, we'd be able to build AIs that produce them.

If I were vetting these stories for AI, based on these passages I would be suspicious that they were generated in part by AI. The style of writing doesn't ring quite right to me.

I agree that the things Simon's identified can be hallmarks of AI-written text, and if I were editing these stories I'd flag them for revision. But I share AS's doubts that this is what the "detector" is actually picking up on, and I definitely wouldn't be confident in flagging these excerpts as AI-written.

I'd also note that it's easy to rationalise things in hindsight and say "ah, I can see why this would've been flagged" when we know what the outcome was. It would be more meaningful to do a blinded test here: put up excerpts from rejected stories, alongside other excerpts from non-rejected stories by new authors, without saying which is which, and see whether authors here can reliably pick which one is flag-worthy.
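
To make that concrete, here's roughly what such a blinded test could look like - a minimal sketch with placeholder excerpts and labels, nothing more. In a real run you'd paste in actual passages and keep the rejection labels hidden from the raters until the end:

```python
import random

# Placeholder excerpts - in a real test these would be real passages from
# rejected and non-rejected submissions by new authors.
excerpts = [
    {"text": "Excerpt A ...", "rejected": True},
    {"text": "Excerpt B ...", "rejected": False},
    {"text": "Excerpt C ...", "rejected": True},
    {"text": "Excerpt D ...", "rejected": False},
]

random.shuffle(excerpts)  # hide which is which from the rater

correct = 0
for item in excerpts:
    answer = input(f"\n{item['text']}\nFlag-worthy? (y/n) > ")
    guessed_rejected = answer.strip().lower().startswith("y")
    if guessed_rejected == item["rejected"]:
        correct += 1

# If raters can't beat roughly 50%, the "I can see why it was flagged"
# feeling is probably hindsight talking.
print(f"{correct} correct out of {len(excerpts)}")
```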
 
For instance, suppose I were to tell you that this is an excerpt from a story rejected by the AI detector:

Martin was, in most ways, a good husband: attentive, loving, and supportive of her career. In time, though, Erin learned that Martin also was jealous, controlling, and short-tempered. The one thing that always stirred him to anger was knowing or suspecting that another man was looking at his wife. As a result, Martin always was nagging and badgering her about what she was wearing. He wanted her always to look good for him, but didn't want any other man to see her.

Given that knowledge, you might go looking for small inconsistencies. You might question how Martin can be "jealous, controlling and short-tempered" if in fact there is just one thing that reliably angers him; you might question how he can be described as "loving" if he's "always nagging and badgering her".

Later:

Erin found it difficult fully to enjoy his compliments when so often there was a warning in them. Then she heard the TV pop on and the sound of ballpark cheering wafting through the house. She'd lost her husband to baseball for the rest of the night.

Above we were told that he's "attentive, loving", but here we're shown that she's chafing because he's ignoring her for a baseball game. I could talk about how this kind of small inconsistency betrays an AI's lack of understanding of what it's writing.

But actually these excerpts come from a story published by Simon in 2017, long before GPT or "AI rejections" were a thing here. These are just the minor imperfections that arise from human authors being fallible (I deliberately picked one of Simon's earliest stories).

Or they might even be deliberate. It's possible that Simon intentionally set up a contradiction between Martin's initial description as "attentive" and this later scene, in order to convey a character who's coming to realise that her husband doesn't have the virtues she's been crediting to him.

My point here isn't to trash Simon's writing. One could probably do something similar with just about any author's first stories, including my own; indeed, I had to edit out a continuity error in my very first story here after a reader pointed it out.

My point is just that if we go into a story with the knowledge that it was rejected as AI-written and looking for reasons why, we will find them, because we're good at finding reasons for things. But it doesn't mean that these were the detector's reasons for flagging it, or that the story actually was AI-written. (Not that Simon or anybody else here asserted the latter for these excerpts.)
 
Reading the examples, I'll high-handedly declaim that the tool being used is shit.

AI doesn't do that. It doesn't generate prose with any narrative drive or emotive POV.

The tool appears to be flagging stylistic tics, generic or cliché phrasing, or some similar nonsense that someone decided were tells.

Hopefully it's freeware; otherwise, Laurel's due a refund.
 
For what it's worth, undetectable.ai found the first snippet to be 50% AI. It found the second snippet to be 100% human, which does not mean that every part of the story would register as human.
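
If anyone wants to run the same kind of spot check on their own excerpts, it's easy enough to script. A rough sketch only - the endpoint and response field below are invented placeholders, not undetectable.ai's actual API:

```python
import requests

# Invented endpoint and response format - NOT any real detector's API.
DETECTOR_URL = "https://example-detector.invalid/api/score"

def ai_score(text: str) -> float:
    """Send a snippet to the (hypothetical) detector and return its 'percent AI'."""
    resp = requests.post(DETECTOR_URL, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # hypothetical field name

snippets = {
    "first snippet": "Señor Miguel stormed out of the bustling kitchen...",
    "second snippet": "...",
}
for name, text in snippets.items():
    print(f"{name}: {ai_score(text)}% AI")
```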
 
Three or four years ago, if someone wandered in and said, "My story's not doing very well, I'm getting a bunch of negative comments, what can I do to make it better?" a whole bunch of people would have jumped in with a stack of good writerly advice. Such as: use active voice over passive, vary the length of your sentences, use lively dialogue, show don't tell, don't do IKEA plug A into slot B type narrative, etc, etc, etc; and everyone would have nodded, saying, yep, that's really good advice.

But now, with AI being the bugaboo, when some folk offer up exactly the same advice, there are others who say, "No, you can't say that, the guy doesn't need to change what he's doing, that's his style, he doesn't need to change a thing."

You need to make your minds up.

My gut feeling, to be brutal, after reading the various samples people are posting, is that Laurel is possibly skimming the content with her own eyes and thinking - as @SimonDoom does up above - this just doesn't read right to me, and maybe there's no detector being used at all. Given the amount of poor writing she must have read over the years, I'd say she'd be a pretty good judge of that.
 
This site has not addressed the AI issue at all. No interest in changing anything because they don't want to.

It's also the site that is now going to implement a choose-your-own-adventure story format, which is complicated and going to be a lot of work, because they want to.

But remember, adding a bi-sexual category that many people actually wanted is too much work.

It's pointless to keep discussing this.
 
For instance, suppose I were to tell you that this is an excerpt from a story rejected by the AI detector:



Given that knowledge, you might go looking for small inconsistencies. You might question how Martin can be "jealous, controlling and short-tempered" if in fact there is just one thing that reliably angers him; you might question how he can be described as "loving" if he's "always nagging and badgering her".

Later:



Above we were told that he's "attentive, loving", but here we're shown that she's chafing because he's ignoring her for a baseball game. I could talk about how this kind of small inconsistency betrays an AI's lack of understanding of what it's writing.

But actually these excerpts come from a story published by Simon in 2017, long before GPT or "AI rejections" were a thing here. These are just the minor imperfections that arise from human authors being fallible (I deliberately picked one of Simon's earliest stories).

Or they might even be deliberate. It's possible that Simon intentionally set up a contradiction between Martin's initial description as "attentive" and this later scene, in order to convey a character who's coming to realise that her husband doesn't have the virtues she's been crediting to him.

My point here isn't to trash Simon's writing. One could probably do something similar with just about any author's first stories, including my own; indeed, I had to edit out a continuity error in my very first story here after a reader pointed it out.

My point is just that if we go into a story with the knowledge that it was rejected as AI-written and looking for reasons why, we will find them, because we're good at finding reasons for things. But it doesn't mean that these were the detector's reasons for flagging it, or that the story actually was AI-written. (Not that Simon or anybody else here asserted the latter for these excerpts.)

I think this is a perfectly fair, and rather clever, response to my point, and I agree we're obviously likely to be influenced by knowing ahead of time that something was rejected as influenced or written by AI.

But I'll note that I just copied the larger passage of my story from which this excerpt came, plugged it into a free AI detector, and it told me there was a 1% chance that it was written by AI.

I plugged in CatPerson's passage, and the same AI detector said there was a 26% chance of AI generation.

When I plugged PortlyPenguin's passage above into the same AI detector, however, it said there was only a 1% chance of AI generation.

So I'm back to square one: I don't know how these detectors work, and I have no opinion about their accuracy, but I know, perhaps without being able fully to articulate why, that there are some texts that SEEM to me more like something that would be AI-generated than other texts. I think authors who are having some difficulties with this issue would do well to at least try heeding some of the advice they are getting, tinkering with the writing, and seeing if they can get by the AI gatekeeper, however it works.
 
I think authors who are having some difficulties with this issue would do well to at least try heeding some of the advice they are getting, tinkering with the writing, and seeing if they can get by the AI gatekeeper, however it works.
But Simon, that's their natural style.
 
I'll respond to both samples submitted so far. I'm being honest and trying to be helpful, not snarky or dismissive. I don't accuse anybody of anything.

If I were vetting these stories for AI, based on these passages I would be suspicious that they were generated in part by AI. The style of writing doesn't ring quite right to me. There are too many commas, and the passages are a bit over-written. There's a lack of sensible flow from one sentence to the next. For instance, in the first passage, Senor Miguel "stormed" out of a kitchen, but various kitchen sounds didn't "faze the imposing man." What does that mean? He's angry, so he's fazed. What does it mean to say that he's not fazed when from the first sentence he obviously is? One sentence later he uncorks a bottle of tequila "with a flourish," which also seems at odds with the mood of the first sentence, and then in the next sentence he "relishes" the "smoky burn." Why is he relishing something if he is angry? To my eye, there is an odd disjunction in the mood and meaning from one sentence to the next that suggests a lack of clear awareness of the impact of the words that I would tend to associate with AI.

In the second passage, in the first paragraph, the narrator says they've spent enough time in bowling alleys to be able to ignore the "sensory nightmare," but in the next sentence says "it was nearly impossible to ignore it completely." So which is it? There is a discontinuity of real meaning from one sentence to the next that suggests to me the possibility of AI: words are being put together, but they don't quite make sense together. And the sentences seem a little over-done. Too many phrases and clauses put together, something that as far as I can tell is common with AI.

I'm not accusing anybody of using AI here. If you say you are not, I believe you. But these passages make me wonder if newer authors are being influenced by AI tools and encouraged to write like AI even if they are not actually using it.
Grammar and writing are not my areas of expertise. Comma placement can be kind of tricky for people, knowing when to use them and when not to. I get what you're saying about seeing a lack of continuity at certain points, but I'm not sure that's a good indication of AI, or that the writers were influenced by AI writing. I've come across a lot worse. I think the writers just needed to do a better job of explaining some of those things, or of wording them more clearly. The character in the bowling alley was very anxious, making them more sensitive, which caused the sensory nightmare in the alley to bother them more than it usually would. I think they were going for something like that. I've had an experience kind of like that, of walking into a place like an arcade and having my anxiety go up another notch when normally I'm not all that bothered by it.

That said, I could see how someone might set up the AI detector to key on such things, even though human writers can make the same kind of mistakes. Definitely worth considering.
 
I've always thought the common denominator in AI-accused pieces of writing was fairly obvious: they are written in simple, proper, and well-trodden English. They follow the conventions English is meant to follow, in that they represent the sort of writing teachers would love. Often the inner monologue maintains this style, usually for only a couple sentences at a time between pieces of physical action. I'm not saying that writing like this is a bad thing - because by many accounts this is how English is meant to be written - but it is similar to how an AI would craft prose (at least in its micro-level syntax; I'm sure the stories themselves are more sophisticated than AI).

Let's take the first example.
Señor Miguel stormed out of the bustling kitchen, his temper flaring. The clatter of pots and pans, the sizzling of meat and onions, and the hurried footsteps of the kitchen staff couldn't faze the imposing man.
I would never write this. I simply don't write in this proper-English format. I wouldn't state that a temper was flaring, or that the man was imposing. If I re-wrote this myself, I'd probably use more sentences. I might hang some dubious clauses in there to vary up the pacing. This isn't to say that the original prose is wrong. It's just a certain style, and it's this style which Laurel's flaggers seem to take issue with. If you told me the two passages shared near the start of the thread were by the same author, I probably wouldn't argue otherwise.

The problem comes from actually dealing with the issue. People should be able to write how they want to. By many accounts, AI writing today is similar to older, more traditional fiction. It's not fair to ask people to change their perfectly valid writing style to pass a check. It's equally not fair to let anything through and open the floodgates to truly AI-generated stories.

We are stuck in the mud. And the mud will probably only get deeper as AI gets better.
 
So what exactly is the point of taking a small excerpt of a full story, when any part of it could've tripped the fabled AI detector, and trying to draw some conclusion about the remaining 90% of the story we don't get to see? Based on an excerpt that's deliberately chosen by the author, no less, with an implicit or explicit bias towards picking the part that he or she thinks does not sound AI-like?

You are then looking at those fragments and drawing conclusions about the entire review process, of which the so-called AI detector is likely just a small part. And while doing so, you commit fallacy after fallacy, such as bringing up stories that have been approved previously as justification that these fragments should be perfectly okay now. I'm really skeptical there's going to be anything productive gained from these wild speculations.

But hey, it's fun! So here's my take on what actually happened to the samples being presented:
  1. There was something else in the story that highlighted it as needing manual review. Maybe "ten year old" is a phrase that, earlier in the story, refers to the tequila that Señor Miguel uncorked with a flourish? Maybe there are three commas in a row somewhere (,,,) that tripped the low-pass filter that looks for obvious typos? Whatever it was, the story was flagged for Laurel to look at.
  2. She read through a paragraph and found it suspiciously stilted, like some here did. She then pushed it through magical-ai-detector.pl (because let's face it, it's a Perl script, probably 2k lines long and patched over several years) and it came up as 45% AI-generated or something, based on the overall content of the story.
  3. She tossed the submission back with an AI rejection slapped on it.
Is it plausible? Sure. Did it actually happen like this? Probably not. Will we ever know? Not unless we get to see that awesome Perl script :)
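
For fun, here's that guess as code - Python rather than Perl, with every check and threshold invented on the spot. None of this is known to be anything like what the site actually runs:

```python
# Pure speculation, mirroring the three-step guess above.

def needs_manual_review(story_text: str) -> bool:
    """Step 1: cheap automated tripwires flag a story for human eyes."""
    obvious_typos = ",,," in story_text
    keyword_hit = "ten year old" in story_text.lower()  # e.g. the tequila's age
    return obvious_typos or keyword_hit

def detector_score(story_text: str) -> float:
    """Step 2: stand-in for magical-ai-detector.pl; returns an invented number."""
    return 45.0

def review(story_text: str) -> str:
    """Step 3: if it was flagged and the score looks bad, back it goes."""
    if not needs_manual_review(story_text):
        return "approved"
    if detector_score(story_text) > 40.0:
        return "rejected: flagged as AI-generated"
    return "approved"
```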
 
We are stuck in the mud. And the mud will probably only get deeper as AI gets better.
I'm not willing to take a punt on what AI text will look like in five years, certainly not ten. It might well be coherent from beginning to end (which it isn't at the moment), and it might be able to construct a story.

It will be up to the integrity of AI users (I'm never going to acknowledge it as "writing" done by "authors" because in my mind it never will be) to declare it as a machine work or pretend it's their own original work, because I think you're right, it will only get "better". But I can't see how it can ever be original, if the whole premise of the training is to predict what comes next, based on everything it's been trained on (what came before).

But I reckon the junk to good ratio will be predominantly junk, because let's face it, that's pretty much what we've got here already...
 
But I can't see how it can ever be original, if the whole premise of the training is to predict what comes next, based on everything it's been trained on (what came before).
Exactly. I'm no expert, but as I understand it, we can't have truly creative AI without building that creativity into the base model. ChatGPT won't suddenly become creative with an update, because the generative process we have now is fundamentally uncreative. You have to work backwards to work forwards to achieve true originality. And it'd be a big deal if we were successful. Because if your machine is creative, who's to say it's not sentient?
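
My layman's picture of "predicting what comes next", in toy form. The word counts below are made up and a real model is unimaginably bigger, but the basic loop is the same: look at what came before and pick a likely continuation - nothing it hasn't already seen can come out:

```python
import random

# A made-up table of "which word tends to follow which" - a toy stand-in
# for what a language model distils from its training text.
next_word_counts = {
    "the": {"smoky": 3, "bar": 5, "sea": 2},
    "smoky": {"burn": 7, "bar": 1},
    "burn": {"trailed": 4, "of": 2},
}

def continue_text(start: str, length: int = 3) -> str:
    """Append a likely next word a few times, weighted by how often it was seen."""
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the smoky burn trailed"
```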

I think AI is going to be more trouble than it's worth in most areas of society. Maybe it'll cause the apocalypse. Maybe it'll just be a constant nuisance, like a stone in your shoe. But again, I am no expert. I just write shit and drink tea! And sometimes I really strain my brain to think about the future (briefly).
 
I think AI is going to be more trouble than it's worth in most areas of society. Maybe it'll cause the apocalypse. Maybe it'll just be a constant nuisance, like a stone in your shoe. But again, I am no expert. I just write shit and drink tea! And sometimes I really strain my brain to think about the future (briefly).
My understanding is that it's incredibly power hungry, doing all that bit crunching. Surely the guys designing it have put in an off switch...
 
My understanding is that it's incredibly power hungry, doing all that bit crunching. Surely the guys designing it have put in an off switch...
I'm sure they have. But it's a very uneconomical off switch, I'm sure you'll understand... 🤑

Edit: I saw that Apple is planning a lot of AI stuff in their coming iOS update. If you have an iPhone, you'll soon be able to ask ChatGPT stuff straight from Siri. Take that as you will. (maybe this is already out? I don't have automated updates turned on)
 
My gut feeling, to be brutal, after reading the various samples people are posting, is that Laurel is possibly skimming the content with her own eyes and thinking - as @SimonDoom does up above - this just doesn't read right to me, and maybe there's no detector being used at all. Given the amount of poor writing she must have read over the years, I'd say she'd be a pretty good judge of that.

Aww, EB, I knew tagging you would be awesome. While what you are saying is what is expected from a fanboy, take a look at what you are claiming. We all know that there are over a hundred daily submissions, and you are saying that she checks each of those submissions personally? Seriously? In a reply to someone's PM, Laurel even said something like "Your work keeps coming back as AI-generated." I would say that "keeps coming back" implies that she is using some tool for checking.

Three or four years ago, if someone wandered in and said, "My story's not doing very well, I'm getting a bunch of negative comments, what can I do to make it better?" a whole bunch of people would have jumped in with a stack of good writerly advice. Such as: use active voice over passive, vary the length of your sentences, use lively dialogue, show don't tell, don't do IKEA plug A into slot B type narrative, etc, etc, etc; and everyone would have nodded, saying, yep, that's really good advice.

But now, with AI being the bugaboo, when some folk offer up exactly the same advice, there are others who say, "No, you can't say that, the guy doesn't need to change what he's doing, that's his style, he doesn't need to change a thing."

Well, I agree to an extent but keep in mind that those authors back then had asked for feedback and advice on how to improve. No one was forced to accept those suggestions. No one was forced to implement them into their writing or face rejection from a story administrator. Yet these authors here are forced to do something or face rejection. That makes a hell of a difference. Also, their stories aren't being rejected because they are badly written. They are being rejected because to some tool they look like they were written by an AI. So as much as I always support every conversation about improving, I can't help noticing that these authors are being told to improve for the wrong reasons.

But let's even say that this attitude is justified, because improvement can't be bad, right? The truth is that we have absolutely no idea how much all our advice on how to "write better" would help their stories pass the test. We don't have the parameters in this case. All we have is comparing and guessing.
 
Aww, EB, I knew tagging you would be awesome. While what you are saying is what is expected from a fanboy, take a look at what you are claiming. We all know that there are over a hundred daily submissions, and you are saying that she checks each of those submissions personally? Seriously? In a reply to someone's PM, Laurel even said something like "Your work keeps coming back as AI-generated." I would say that "keeps coming back" implies that she is using some tool for checking.
It's been obvious for at least a decade that a word-bot alerts for under-age and non-con key words, just as examples, and pings on grammar and punctuation likewise. It doesn't take much imagination to think that a machine-based flag puts crap in front of Laurel, who could make a judgement in ten seconds, using human eyes. I've never said, "She reads every word."

I only need to read two or three sentences to judge when to back-click on a story - it's not hard to spot junk.

If you're so clever, how do you think the stories are vetted? Tea-leaves?
 