Self-editing using AI

I think you have touched upon an often-ignored bit of reality: tools that aid us versus generative cheats.

We all employ any number of technological tools to write - from the word processor with its numerous formatting and design features, to spellcheck, to an app that inserts the desired HTML code for us. I consider these tools of the trade. They rely upon my decisions to implement and achieve the desired function. There are a number of other tools and resources out there, such as ProWritingAid, which provide analysis of the written word without forcing generative replacement content on you. There are sites that, for example, analyze the reading level of your writing (which can be helpful when targeting some of the readers here).
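To give a concrete sense of "analysis only": readability scoring is purely mechanical arithmetic over sentence and syllable counts, with no generative model anywhere in the loop. A minimal sketch, assuming the third-party textstat package (pip install textstat) and a hypothetical file name:

import textstat

# Hypothetical file; any plain-text draft will do.
draft = open("my_story.txt", encoding="utf-8").read()

# Flesch Reading Ease: higher means easier; 60-70 reads as plain English.
print("Reading ease:", textstat.flesch_reading_ease(draft))

# Approximate US school grade level of the prose.
print("Grade level:", textstat.flesch_kincaid_grade(draft))

Nothing here rewrites a word; the numbers just tell you who can comfortably read what you wrote.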

IMO, it boils down to using those tools for analysis only. No generative features at all.
Part of what makes a master is the perfect understanding and intimate familiarity with their tools. A master blacksmith understands his forge, his anvil, his hammer, and can tell at a glance the exact heat of his workpiece. A master joiner can tell by sight whether something is true, and understands intimately how a particular piece of worked wood will change as it ages and takes in or gives up moisture to the air. A master navigator understands her sextant and can tell her latitude to a degree or better simply by glancing at the angle of the sun at noon, or one of any number of stars at night.

LLMs take this away from us - they put up paper walls between us and the profound knowledge of years that underlies our achievements. Worse - they remove the need to be intimate with your workpiece, be it oil, steel or wood. An LLM is a six-degree-of-freedom CNC mill that can substitute its own "intent" based on things that other CNC mills have made; it will listen to your suggestions and then obey or ignore them as capriciously as a cat. Sometimes it will do things that are subtly wrong in a way that only a master will understand... but soon there will be no new masters, because nobody will ever get to the point of learning mastery of anything except LLMs.

To struggle is to live, and if it is not worth struggling for it is not worth having.

I am human. I choose the fire, the iron and the anvil. It is harder, but the end result is more pleasing to me.
 
By that logic you should be writing your stories on parchment with ink and quill. Quite modern of you to be employing a computer, Ms. Blacksmith. 😄
 
Part of what makes a master is the perfect understanding and intimate familiarity with their tools. ...
You are lumping all AI tools into the same LLM generative content basket, and that is what I was attempting to dispel.

It simply isn't the reality.

An ASE-certified mechanic who uses a diagnostic analyzer to isolate a problem with a car's engine isn't any less a "master" than his predecessors were.
 
An ASE-certified mechanic who uses a diagnostic analyzer to isolate a problem with a car's engine isn't any less a "master" than his predecessors were.

A diagnostic analyzer is an expert system, not an AI. And the mechanic has spent years building and rebuilding all kinds of engines, and likely has an intuitive grasp of clearances, compression ratios and valve types that would appear arcane to a non-expert. The diagnostic analyzer is of no use without the mechanic; it is a symbiotic relationship. It is a more complicated version of ten thousand hours of knowledge, a torch, a set of shims and some WD-40. And he'll have an apprentice that he's training up.

LLMs (and similar systems) are parasitic; as they do more, the end user does less, and whatever skills the user has atrophy. LLMs differ from other "tools" in that they have the capacity to remove any need to understand the tooling and processes that underlie what they are doing. Point an AI system at a mammogram and it tells you you have (or don't have) breast cancer. There are people out there right now who are completely comfortable with that; I, on the other hand, would rather have a human with 30 or 40 years of experience in mammograms look at my scan and make the call. They could both be right, or both be wrong, or who knows - but the human's decision tree is determinate, whereas the model underlying the AI system might have a weighting towards cheese on Tuesdays that subtly informs its decision tree.

And most importantly - if the AI system is 'good enough' it will entirely replace the human element. And then, suddenly, nobody will train in radiography specialising in breast cancer - because there will be no space for them in an AI-breast-screening world. And when you ask the AI system about its corpus and decision-making steps - why, you're trusting it to give you a factual answer.

By that logic you should be writing your stories on parchment with ink and quill. Quite modern of you to be employing a computer, Ms. Blacksmith. 😄
My computer is not a thinking machine. I follow the tenets of the Orange Catholic Bible.
 
By that logic you should be writing your stories on parchment with ink and quill. Quite modern of you to be employing a computer, Ms. Blacksmith. 😄
Only if the skill she's practising were handwriting, or calligraphy. When it comes to storytelling, it's the craft of plotting and choosing the right words that matters, and for that it doesn't matter whether you use parchment and quill, a computer, or your voice and hands.
 
I’ll repeat my standard thought:

AI as it exists today, with its large language models, is built on scanning everything on the internet, without permission, for training data.

What’s another word for taking things without permission? Theft.

In other words, AI as we know it is rooted in theft. Using and liking AI is the equivalent of buying stolen goods.

To add insult to injury, AI then presents its "answers" as if the AI tool itself produced them, not giving credit to where the answers really came from.

Using AI is like buying stolen goods then celebrating the low low prices.

It can be avoided. Scroll past the AI summary in your search engine. Don’t use AI, not out of being backward, but as a point of ethical pride.
 
As budding writers, we often feel like we're not ready to approach an editor. Or maybe we haven't reached the point where an editor can really dive into our work yet. So, what if we used AI to get some early feedback?

"AI?" I hear you say. "Is that not prohibited on Literotica?" It is, and for good reasons.

But what if the AI acted more like a coach? A tool that points out weaknesses in your writing and challenges you to improve it yourself, so the wording remains entirely your own.

You can achieve this with a specific prompt directive, such as: "Do not, under any circumstances, suggest alternative wording or phrases." The AI will follow this rule, focusing on what should be improved rather than rewriting your work.

Once this is established, you can ask for feedback on specific, abstract concepts in your writing:
  • "Where is the style too choppy?"
  • "Where do you find clichés or overused metaphors?"
  • "Can you identify any plot holes?"
  • "Is the fantasy setting believable? Are the characters and plot consistent with that setting?"
  • "Are the characters portrayed consistently throughout the story?"
The AI might respond with feedback like:
  • "These two scenes could benefit from a more elaborate transition."
  • "Your dialogue here feels unnatural and sounds more like information dumping."
  • "You're using too much passive voice in this section."
It won't tell you how to rewrite anything. That part is entirely up to you. And this is where the real learning and practice happens.
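For those driving a model through an API rather than a chat window, the same coach arrangement might look like this - a minimal sketch, assuming the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY in the environment, and an illustrative model name:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COACH_DIRECTIVE = (
    "You are a writing coach. Point out weaknesses in the text you are given. "
    "Do not, under any circumstances, suggest alternative wording or phrases; "
    "describe each problem and where it occurs, nothing more."
)

def coach_feedback(story_text: str, question: str) -> str:
    """Ask one diagnostic question about the story; the system directive
    keeps the reply to diagnosis rather than rewriting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": COACH_DIRECTIVE},
            {"role": "user", "content": f"{question}\n\n---\n{story_text}"},
        ],
    )
    return response.choices[0].message.content

# e.g. coach_feedback(draft, "Where is the style too choppy?")

The system message is the load-bearing part: it states the "no rewording" rule once, up front, so every question you ask afterwards inherits it.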

However, there are caveats. AI models are often trained on vast amounts of "flowery" or over-the-top prose, so they might guide you towards a style that's full of clichés or opulent, irrelevant details.

Never follow AI advice blindly. Always ask yourself, "Do I really want this?" For example, the AI might suggest a transition is needed, but maybe you intentionally jumped from one scene to another for a jarring effect. Or maybe it flags passive voice, even though it might be a deliberate style choice, as in 'Pride and Prejudice'.

Finally, remember that an AI’s basic programming is to please you. If you don't prompt it correctly, it might praise even poor writing. On the flip side, if you ask it to be "harsh," it might find fault everywhere.

Writing prompts that get truly valuable feedback from an AI can be tricky, but used as a coach—not a co-author—the AI can be a useful tool.
I haven't gone through all of the other posts responding to your suggestion, ... yet.

But you CAN use an AI to provide a general review or critique of your work-in-progress. In fact, I have found it to be quite useful, because the human reviewers I've asked for beta-reader feedback in the past are "one & done". The same person can't read the revised draft a second time and provide a truly fresh view of the revisions; they are biased by their previous impression of the story.

When working on my latest published story here on LitE ("A Band of Sisters and Brothers"), my first draft back in May received a ChatGPT review basically saying "This is misogynistic crap which will alienate everyone!" It then went on to point out the story's flaws, including the one-dimensional characters, lack of agency, etc.

After my first MAJOR revision, the same ChatGPT came back with a fresh review effectively saying, "The story shows a unique perspective of life in the military." So I continued to revise and ask the AI to "Provide a general review of the following erotic story: ..." (The 25K-word story was broken into 5 parts for the AI reviews.) I then spent 6 weeks making revisions and asking for more reviews.

I think the final resulting story as published is FAR better than my initial first draft. [EDIT: And the final is DRASTICALLY different than my first attempt.] And I have those AI reviews to thank for the fresh set of "eyes" each time to tell me "That's crap" and yet still pick up on the subtle changes a human would have skimmed over on a second or subsequent review.

The one thing to never do with the AI is give it your story and say "Provide recommended changes ..." Merely take its points of "This character is one-dimensional," or "That scene does not flow with the rest of the story," and make your own changes to address the flaws.

Also, I found ChatGPT to be more critical and more difficult to work with on the erotic story reviews, because of its "rule" against erotica (it will do the review, but sometimes deletes the response within a minute of showing it to you). Google Gemini provided friendlier reviews but was prone to mistakes, missing some of the key points in a 6K-word story part. With my story, its first review was confused and completely MISSED the point of who died in the epilogue!
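For anyone curious, the part-by-part review loop described above might look roughly like this - a sketch assuming the OpenAI Python SDK as before, with an illustrative model name; a real split should follow chapter or scene boundaries rather than raw character counts:

from openai import OpenAI

client = OpenAI()

def split_into_parts(text: str, parts: int = 5) -> list[str]:
    """Naive even split by length; scene boundaries would serve better."""
    size = len(text) // parts + 1
    return [text[i:i + size] for i in range(0, len(text), size)]

def review_part(part: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Provide a general review of the following story "
                       "excerpt. Name its flaws; do not rewrite anything.\n\n" + part,
        }],
    )
    return response.choices[0].message.content

# for i, part in enumerate(split_into_parts(draft), start=1):
#     print(f"--- Part {i} ---\n{review_part(part)}")

Rerunning the loop after each revision is what gives you the "fresh eyes" - the model holds no memory of the previous draft unless you hand it one.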
 
A diagnostic analyzer is an expert system, not an AI. ...


My computer is not a thinking machine. I follow the tenets of the Orange Catholic Bible.
[image: Missed the Point.jpg]
 
You are lumping all AI tools into the same LLM generative content basket, and that is what I was attempting to dispel. ...
All AI tools (currently) run on LLMs and are powered by theft. There is no distinction to be made.
 
OK so

It's true that LLMs are all the AI some people are familiar with, but there is a misconception being repeated over and over, to the effect that all AI is LLMs.

That's not right.

It's true that for the purpose of writing, LLMs are practically all that matters, but there are other AI tools and systems besides LLMs.
 
It's true that for the purpose of writing, LLMs are practically all that matters, but there are other AI tools and systems besides LLMs.
Practically all that matters or...
 
The only absolute is that there are no absolutes.
ProWritingAid's own documentation says that "most of its reports and tools analyze documents to provide suggestions without directly using LLMs."

What does that mean?

EDIT: I went to find the website address where I saw this claim and couldn't navigate back to it. I'm not trying to set up a strawman argument. Ignore the request.
 
But what if the AI acted more like a coach? A tool that points out weaknesses in your writing and challenges you to improve it yourself...

If it were capable of doing this competently, it would be a very different thing from the "AI" products that actually exist. I'd encourage anybody thinking about doing this to read Amanda Guinzburg's essay about her experiences with GPT critique: https://amandaguinzburg.substack.com/p/diabolus-ex-machina

Finally, remember that an AI’s basic programming is to please you.

No, it's not. It does often give fawning responses, but its main objective is to imitate the kind of thing it sees in its training data.

What that means here is that it will say things that sound like critique, remixed from critique other people have made of other people's stories. When it tells you something like "your characters are compelling but their relationships need more development", it's not saying that because it's actually capable of analysing characterisation; it's telling you that because these are the kinds of things it's seen in other critiques and it "wants" to sound like it's giving critique.

You might as well go to one of the feedback threads here, look at feedback that was given for somebody else's story, and gaslight yourself into believing that it's feedback on your own work.

If you don't prompt it correctly, it might praise even poor writing. On the flip side, if you ask it to be "harsh," it might find fault everywhere.

If AI is praising "even poor writing", then the problem isn't that you prompted it wrong, it's that it doesn't fucking work.

Writing prompts that get truly valuable feedback from an AI can be tricky, but used as a coach—not a co-author—the AI can be a useful tool.

I get so tired of this "you're prompting it wrong" line. If it had the reading comprehension skills required to give meaningful feedback on a story, surely it shouldn't have any difficulty understanding what you're asking it to do.
 
If AI is praising "even poor writing", then the problem isn't that you prompted it wrong, it's that it doesn't fucking work.
The obvious test to distinguish the two situations would be to feed a universally acclaimed piece of writing into the LLM, prompt it to be “harsh” and then see if it produces a meaningfully different critique to that of objectively bad writing.

Something tells me the results of such a test wouldn’t be in the LLM’s favor…
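That test is easy enough to run. A minimal sketch, assuming the OpenAI Python SDK, an illustrative model name, and two hypothetical text files - one acclaimed passage, one deliberate clunker:

from openai import OpenAI

client = OpenAI()

HARSH_PROMPT = "Be harsh. List every flaw you can find in this passage:\n\n"

def harsh_critique(passage: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": HARSH_PROMPT + passage}],
    )
    return response.choices[0].message.content

# acclaimed = open("hemingway_excerpt.txt").read()  # hypothetical files
# clunker = open("bad_first_draft.txt").read()
# print(harsh_critique(acclaimed))
# print(harsh_critique(clunker))

If both critiques find "fault everywhere", the harshness is coming from the prompt, not from anything in the text.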
 
I bet that custom one-piece-at-a-time Cadillac would've handled sinosoidal deplanaration like a champ. The rotor arms back in those days were lousy with the kind of quasineutroid noise that negates deplanaration all together, sinosoidal or otherwise.
This made me wonder about something...

[screenshot attached]
 
The AI engine can be configured not to use input for training. This is what I did.
Did you actually configure it not to use input for training? Or did you just prompt it "please don't use input for training" and then trust that it honoured your request?
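The distinction matters because a training opt-out is a platform setting, not a prompt. A hedged sketch of the API route, assuming the OpenAI Python SDK and an illustrative model name - OpenAI's stated policy is that API traffic is not used for training by default, while in the ChatGPT app the opt-out is a toggle under the account's data-controls settings:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# No "please don't train on my input" instruction appears here, because
# the model cannot grant one; data handling is governed by the provider's
# policy for the channel in use, not by prompt wording.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Critique this paragraph: ..."}],
)
print(response.choices[0].message.content)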
 
The obvious test to distinguish the two situations would be to feed a universally acclaimed piece of writing into the LLM, prompt it to be “harsh” and then see if it produces a meaningfully different critique to that of objectively bad writing.
I did that and it trashed the text, its comments ridiculously off the mark.
But it recognises most classical texts. If you feed it Hemingway, it will recognise it and not trash it.
 
If AI is praising "even poor writing", then the problem isn't that you prompted it wrong, it's that it doesn't fucking work.
I've had to do that with human editors too, several times. Many human editors are cautious with criticism. They need to be encouraged to be more critical, just like the AI.
 