Incredibly nice compliment from AI detection software

metropolinational

Wannabe Writer
Joined
Jul 10, 2025
Posts
185
So, I am trying to rewrite a whole chapter of my Botched series from scratch. It's been rejected for AI twice, and I just gave up and decided to start over. I decided to double check the latest version against an AI detector, and it told me it has a couple of red flags. Here is the one it thought was most problematic:
Extremely symmetrical emotional arcs in a single chapter
  • Starts with high anxiety / guilt, laughter release, serious sexual scene, cathartic orgasm, sharing / communal intimacy, religious gratitude / Psalm recitation
That complete “tension → release → laughter → sex → sharing → spiritual closure” shape in one chapter is suspiciously well-engineered for reader satisfaction. Detectors increasingly penalize “too perfect emotional journeys” in mid-length prose.
Of course, this is exactly what I meant to do. I built this chapter precisely for that emotional arc. So, nice work, me! But, will it get rejected again? Is the problem with my chapter that the exact chapter I am trying to write is inherently problematic, regardless of how many times I rewrite it? The chapter that I have been planning for months to write and I have meticulously built up to is too "suspiciously well engineered"?
 
AI lacks emotional depth in its writing; it parrots phrases in overblown ways but never shows emotion, only tells it: she felt this or that, feeling like this or another. It's passive purple prose. The filter words announcing that what follows is happening break the reader's flow. That's the whole point of "show, don't tell." Telling rather than showing comes naturally to many human writers, but for AI it's the golden rule. In AI's world, you have to tell the reader they feel their heart pounding.
 
What checking tool are you trying to use? Because real ones like GPTZero (which have their own problems) aren't going to give you a faux-literary analysis like this.

This reads like a normal LLM being prompted to "analyze" a text sample, and it will churn out bullshit based mostly on how you worded your prompt.

"I suspect that this is written using AI, am I right?" will generate a completely different output than "I wrote this and I'm worried it will be flagged as AI, am I safe?" and both outputs are complete nonsense.
 
My whole point is that this is BS. The piece was written by me, so I knew up front that any "red flags" identified would be nonsense. However, the feedback has been incredibly consistent across multiple platforms.
What checking tool are you trying to use? Because real ones like GPTZero (which have their own problems) aren't going to give you a faux-literary analysis like this.
You are drawing a false distinction between "real" (the ones that give you sentence level analysis) and "fake" (the ones that read it and give you feedback). Both are fake, in that neither can distinguish between AI, human, and hybrid text.

That aside, yes, I used a traditional AI to give me feedback. I have used several "real" AI detectors, including the humanize one that merges all of them. None can provide me with any specific lines that come from AI, which is correct since there are none. That's why I reverted to getting overall feedback for the whole piece. What I posted is what it spit out as the biggest red flag.

I thought it was amusing.
 
I think a lot of us get frustrated with people who passionately believe that LLMs can do things they simply cannot. It’s like arguing with people who believe their horoscope. The only thing they are good at is sounding convincing to people who want to believe.
 
AI lacks emotional depth in its writing; it parrots phrases in overblown ways but never shows emotion, only tells it: she felt this or that, feeling like this or another. It's passive purple prose. The filter words announcing that what follows is happening break the reader's flow. That's the whole point of "show, don't tell." Telling rather than showing comes naturally to many human writers, but for AI it's the golden rule. In AI's world, you have to tell the reader they feel their heart pounding.
You think only AI does that? :LOL:
 
Non-sarcastic question: is there any use in asking AI for feedback? It has no emotion, and emotion drives opinions on creative endeavors like writing, art, and music. It's all about how something makes a person feel, not what a program regurgitates by pulling up reviews of similar material.
 
Non-sarcastic question: is there any use in asking AI for feedback? It has no emotion, and emotion drives opinions on creative endeavors like writing, art, and music. It's all about how something makes a person feel, not what a program regurgitates by pulling up reviews of similar material.
AI is excellent at things like checking logical and character consistency, identifying plot holes, filling in realistic technical/regional/professional terminology and plain fact checking. All of those can be invaluable to a writer.

What I tell my students is to do that work OFF-TEXT, though, in order to avoid having the AI slop slip into your stew.

But yes, just the tasks I wrote above can save hours of work.
 
I think a lot of us get frustrated with people who passionately believe that LLMs can do things they simply cannot. It’s like arguing with people who believe their horoscope. The only thing they are good at is sounding convincing to people who want to believe.
Similarly, I also get frustrated with people who believe AI cannot do anything. Both sides are equally wrong.
 
Of course, this is exactly what I meant to do. I built this chapter precisely for that emotional arc. So, nice work, me! But, will it get rejected again? Is the problem with my chapter that the exact chapter I am trying to write is inherently problematic, regardless of how many times I rewrite it? The chapter that I have been planning for months to write and I have meticulously built up to is too "suspiciously well engineered"?

Remember, the warning is saying "AI could have written this," not "AI wrote this." So I'd take it as a sign that AI can construct story arcs like yours (although I'm doubtful).
 
Similarly, I also get frustrated with people who believe AI cannot do anything. Both sides are equally wrong.
Actually no, they're not. There are forms of AI that are capable of doing real things. LLMs are not among them.
LLMs fundamentally lack many of the capabilities people attribute to them, including even basic reasoning. It's just not there. Any reasoning you sense is coming from you, not it.

I'll just ask -- how many refereed papers on AI have you published?
 
I'll just ask -- how many refereed papers on AI have you published?
Appeal to authority is a logical fallacy, so I'm not going to tell you (but also because I don't want to become more doxxable).

A further logical fallacy is the non-sequitur, which you use above, twice:

That LLMs are fundamentally incapable of "many" capabilities "people attribute to them" does not suggest that LLMs aren't capable of doing real things. That's silly.

That LLMs are incapable of basic reasoning, sort of like a screwdriver or an ATM, does not suggest that LLMs aren't capable of doing real things. Again, that's silly.

I'm not going to engage in a back and forth here and derail the conversation. If you want to start a thread and challenge me to a debate on this issue, I would be happy to engage.
 
Okay be the person who says "I saw 17 well researched papers by renowned scientists on this public health issue, but I read someone's post on facebook that made sense to me that disagreed, so I think it's still an open issue."

You can take things out of context if you want, but I did state the one thing that LLMs are capable of, which is sounding convincing to people who want to be convinced. That is their only capability, AS I SAID.

Getting feedback on something requires reasoning, which is the use case you brought up. If you don't care about reasoning, ask your ATM for feedback. It's more easily recognized as garbage.

I gladly walk away from this. Take the time to learn how they work and what they are capable of doing. And not capable of doing. I can provide lots of good intro material if you want.
 
I believe AI is a tool. Like many tools, it can be useful.

My imagination runs ahead of me when I think about this age of AI. I imagine writers felt something similar when typewriters arrived. Word processors. E-books. Self-publishing.

But AI is different.

It doesn’t just assist. It accelerates. It multiplies. It dwarfs what came before.

And abundance changes value signals.

When photography became cheap, oil paintings didn’t die. They became fine art.

When digital music exploded, vinyl didn’t disappear. It became tactile ritual.

When self-publishing flooded shelves, curated presses gained prestige.

AI creates abundance.

Clean structure. Competent prose. Emotionally legible narrative. Grammatically stable writing.

So what becomes rare?

Irregularity with intent.

Improper punctuation used for a pause that matters.
Breaks.
White space.
Words that blur meaning and amplify ambiguity.

Humanity provides texture.

And texture becomes currency.

Your errors become fingerprints.

I’ve decided to make my writing unmistakably my own.

Hand sculpted.
Carved.
Without wax.
 
Okay be the person who says "I saw 17 well researched papers by renowned scientists on this public health issue, but I read someone's post on facebook that made sense to me that disagreed, so I think it's still an open issue."
This is a fundamental misrepresentation of the appeal to authority fallacy. You sound like a smart guy, so I really have no idea why you would do this, but I suspect it's passion about this subject.

On the off chance that you don't know this already, let me explain: 17 well researched papers are evidence, regardless of who wrote them. Whether they are experts in the field or first year students, it's the DATA that constitutes the evidence. That is why peer review should be blinded, so that reviewers are not misled by titles and credentials. They should only be guided by the quality of the paper itself.

Now, a Facebook post is very unlikely to contain data. However, if it does, if methods are presented, if sources can be verified, then yes, I will take it as evidence too.

The likelihood of a Facebook post having all of the above and adding up to GREATER evidence than 17 well researched papers is infinitesimal, but it's not zero.

And none of it has anything to do with who wrote the papers or the Facebook post.
 
Similarly, I also get frustrated with people who believe AI cannot do anything.

It can't. Here on Lit at least.

Accept that the people who run this place are AI-phobic, and structure your writing process accordingly, or find a more tolerant place to publish. That's my advice.

Everything AI can do, humans can do. You say using AI saves (what was your phrase?) "hours of work." But if you enjoy what you do, it's not work. And if it is work? The effort you've put into it gives it greater meaning, greater depth. It has greater value because it came from your mind, rather than a computer.

You don't need to believe that, I suppose, but it's the way Lit views all this.

LLMs are capable of some things. None of them will make your writing better.

This.
 