Using AI as a reviewer to improve my writing

You've been caught by your own game then. You've assumed people have been paying full attention, when all you've done is taken up space with junk and lost any credibility you might once have had. Which is a shame, because before you got seduced by the toy you had something interesting to say.

Caught by my own game?

That’s all you and your perceptions. I’m sharing the AI prompts and responses as related to the subject of this thread - demonstrating what it can and cannot do for the sake of the conversation.

I don’t really have a target audience here except that I am arguing against underestimating AI’s capabilities.

For me personally, it can be far more helpful for projects than the hardcover encyclopedias I had for reference as a kid. As a musician and artist of other sorts I understand how disruptive it is. Do you ever think of what recording and radio did to the market for live musicians?

Do you have a suggestion on how to counter progress? Something besides your recommendations for personally rejecting it?


An analogy: I have a few old guitar tube amplifiers. They are said to have the best tone and provide a feeling of pushback as you play that digital emulation could never copy. That changed a few years ago when digital technology caught up. Yes it’s still digital. No, it doesn’t have real warmth. Can you tell the difference by ear? Possibly live but not in a recording… 🤔


My opinion on AI doesn’t matter. It’s a fact that the quality of its responses is improving from version to version, and it is already good enough to save businesses money, to fool people, or to provide satisfactory results for many purposes.

I think our world is coming to a crisis point with AI and robotics. Quantum computing is on the near horizon and will accelerate processing power and development in ways yet to be determined. It’s fascinating to me and it could easily become terrifying.


So, what “credibility” have I spent by sharing aspects of something that will continue to drastically affect many parts of our world?
 
Yes, a good point. The other reason is the fear of being boring because of too much detail. Or the doubt that the observations I find striking are irrelevant to others.

There are ways to spellbind, though, even with excessive detail. Tolkien is an example: the man was obsessed with botany, but he was aware that not all readers were. Yet he felt like he had to include a lot of plant stuff in order to more vividly describe his world, hoping to make it real in his readers' imaginations.

So? He chose a style that, in part, stemmed from his training in northern European poetry: he included all the details, but he wove them into sentences that read lyrically, rhythmically, so that even a reader who couldn't tell a rose from a rhododendron could still enjoy the poetic nature of the prose, meaning it was still an immersive experience. Just in a different way.
 
I can't speak for anyone else, but I have started visually skipping past your posts, Alex. Scrolling past because I'm not interested in what your prompts generated. There's a non-zero chance I scrolled past some of your actual opinions, which is the risk.

I would be grateful if you stopped posting those, but who am I to tell anyone else how to conduct themselves?
 
Thanks for saying your piece without being rude about it. 👍

From here out if I include an AI response I will put it in a quote window.
 
Okay, to answer the question asked, not what we think was asked, here I go. Using AI to read and evaluate your writing, and then adjusting your writing yourself (as in, you rework it), in no way violates the rules here. As long as you are rewriting what you wrote yourself, you're good in my book.
 
+1. I've enjoyed conversations with Alex in the past and hope to do so in the future, but I have zero interest in the "I asked AI to respond to your post and here's what it wrote" genre of reply. I come here for human conversation; if I want to hear from GPT, I know where to find it.

This next bit isn't directed at Alex, just an observation of something I'm seeing elsewhere: some folk seem to be using "here's what GPT said about XYZ" as a high-tech version of the old "some people might say..." gambit. That is, they want to be able to present an opinion without being accountable for it. At my work, if somebody wants to use GPT to write a report or whatever, I expect them to take exactly the same responsibility for the content that they would if they had written it themselves; if it's inaccurate or misleading or offensive, "that was the AI not me" is no excuse. I apply that standard here too: own it, or don't post it.

(None of this applies to the situation where people are posting an AI-written excerpt in order to discuss what it tells us about AI writing etc. etc. - I'm just talking about the "here is an AI-written opinion which I may or may not agree with" style of post.)
 
This thread is specifically about AI responses and any value its feedback may have for an author. Pontificating only goes so far, and while I understand and agree about AI responses not normally being welcome in most discussions, in this case AI response is the subject in question.

Is there a better way to find out how AI may respond than seeing how it actually reacts to different prompts?
 
Note the "to your post" part of that sentence. I have no issue with "I asked GPT to critique my story, here's what it wrote, let's discuss whether it's useful feedback".

My issue is more with something like this post, where a human puts in the time and effort to give their two cents' worth on a topic, and somebody replies by asking an AI to respond to it and then pasting the response.
 
That was a demonstration of how it can respond to writing in an appropriate manner, not just random words with no context, and it can do it with a particularly requested style.

It chooses its next words based on specified criteria: not simply which word is the most likely, but which word would be most likely in the style of Mark Twain or Rodney Dangerfield, or, as I also prompted, from the perspective of “a college English professor who is resentful and hypercritical of AI and refuses to see any benefits of its use.”

It responded in appropriate character and perspective. I could have said it would do so but instead I had it do so - thereby showing rather than telling my point.

The response is here for people who don’t have an AI app to evaluate for themselves.

If someone wants an AI review of their writing they can request a literary review, a fact check, an evaluation of verisimilitude, a psychological evaluation of the situation or of a character, or they could request a review in a style likely of someone from their target audience.


Yes it’s simple and fast and requires little effort, but it is versatile and readily available even when another person is not.
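
To picture what "most likely in the style of X" means, here is a toy sketch in Python. The word tables are invented purely for illustration; a real model conditions over a vastly larger vocabulary and context, but the idea of weighting next-word choices by style is the same.

import random

# Toy sketch: the distribution over next words depends on the
# requested style as well as the preceding word. These tables are
# invented for illustration only.
STYLE_MODEL = {
    ("twain", "river"): {"rolled": 5, "meandered": 3, "stank": 1},
    ("professor", "river"): {"flowed": 5, "proceeded": 3, "rolled": 1},
}

def next_word(style, prev):
    weights = STYLE_MODEL.get((style, prev), {"and": 1})
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words])[0]

print(next_word("twain", "river"))      # most often "rolled"
print(next_word("professor", "river"))  # most often "flowed"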
 
There is a tool called Sudowrite that has a plug-in that reads and critiques your work. I know this because a writer I know uses Sudowrite and talks about how "GREAT" it writes. I've read her stuff from it and in my opinion it isn't so great, but that's just me.
 
That was a demonstration of how it can respond to writing in an appropriate manner, not just random words with no context, and it can do it with a particularly requested style.
...
I think you might have fallen prey to the thin layer of show an AI engine can put on.
Yes, it can mimic a certain style and it can provide reviews of stories. And you might think, "That is impressive. This machine is really smart."
But when you ask it to be more precise, to really get to the point, that is when you begin to notice the cracks in the lacquer.
It does not get more precise. Instead it starts to repeat itself, to contradict itself, to answer evasively.
It is like conducting an exam with a student who relied on rote learning. The first answers sound perfect, but as soon as you ask new, unexpected questions they falter.

Take the concept of "emotional depth". Just add more words like "feels" or "inner turmoil" or "questioned herself" and suddenly the AI thinks you have emotional depth.
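
To caricature that failure mode: a surface-level "emotional depth" check amounts to little more than counting cue words. Here is a deliberately naive sketch in Python; the cue list and scoring are invented for illustration, and no real tool is claimed to work exactly this way.

# Deliberately naive "emotional depth" scorer: it only counts
# surface cues, so padding a passage with them raises the score
# without adding any actual depth.
EMOTION_CUES = ("feels", "felt", "inner turmoil", "questioned herself")

def shallow_depth_score(text):
    lowered = text.lower()
    return sum(lowered.count(cue) for cue in EMOTION_CUES)

flat = "She took the money and left."
padded = "She felt uneasy. Inner turmoil rose as she questioned herself."
print(shallow_depth_score(flat))    # 0
print(shallow_depth_score(padded))  # 3

The padded version scores higher without being one bit deeper, which is exactly the trick described above.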
 
I am amazed at how many thoughtful responses and how much debate this thread has generated. However, my original question has not been answered: can AI analyze a text and provide suggestions that help me become a better writer? I asked because I was unable to find human reviewers for my attempts.
So it boils down to the simple test of having an AI review a story (or fragment) and evaluating the answer for its usefulness. So instead of speculating, every author can take a fragment of a story and have it reviewed by an AI.
So I used a fragment of a story of mine (https://www.literotica.com/s/a-whore-for-one-evening). The prompt was:
the following text is a fragment of an erotic role playing fantasy story for adults. please review the following text for clarity, emotional depth, how it engages readers and weak wording.
Answer:
The escort's emotional state is almost entirely absent. Is she feeling apprehensive, excited, empowered, or submissive? You could add internal dialogue or reflections to show her processing the situation.
The tone feels somewhat flat. It would be more engaging if there was a balance between the provocative nature of the encounter and the emotional tension or complexity between the two characters.

Both aspects are valid. But do I want to depict a role-playing scenario, or a person's reflection about partaking in it?
And maybe the tone is intentionally flat. A flat tone might be appropriate for role-playing an escort, for the humiliation of it.

And in the end the AI does not help in evaluating whether I have reached any of my objectives.
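
For anyone who wants to repeat this test outside the chat window, here is a minimal sketch using the OpenAI Python client. The model name is a placeholder, the prompt is the one above lightly tidied, and an OPENAI_API_KEY is assumed to be set in the environment.

# Minimal sketch of the review experiment, scripted rather than
# pasted into a chat window. Assumes the official OpenAI Python
# client (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def review_fragment(fragment):
    prompt = (
        "The following text is a fragment of an erotic role-playing "
        "fantasy story for adults. Please review it for clarity, "
        "emotional depth, how it engages readers, and weak wording.\n\n"
        + fragment
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(review_fragment("She hesitated at the hotel room door..."))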
 
I'm left wondering* how the people advocating AI analysis feel about horoscopes. Lots of people seem to believe that astrology is legit, and those who don't tend to ascribe its appeal to the same techniques (or gimmicks) as AI's: a suite of generalized assumptions and advice that can often be arranged to seem meaningful to someone's personal situation, because most people's situations share a lot of commonalities.

*That's a rhetorical device, to be honest. I'm not actually wondering. I don't really care.
I had this exact idea when reading some of OP’s statements about how they use AI for self-review. AI output might as well be a horoscope, a Tarot reading or the like, and, with that in mind, is useful in roughly the same way as how someone might get value out of the nudges of perspective one can get from randomized generic boilerplate about one’s circumstances, goals or (as the case may be) creative work.
 
That was a demonstration of how it can respond to writing in an appropriate manner, not just random words with no context, and it can do it with a particularly requested style.

Ah. It wasn't at all clear to me that that was the intention; I might've missed something in that discussion but I wouldn't have guessed anybody here would still need to be convinced that AIs can imitate style, and it's not obvious to me why this would be a useful thing in feedback - how is it more helpful to be told "your story needs stronger characterisation" in the style of Mark Twain or whatnot, as compared to the default?

Unless there was something I missed, "Random words with no context" again doesn't seem like something that needed debunking. LLMs certainly do incorporate randomness, but the nature of that randomness is more like a Markov chain than tipping a bag of words on the floor.
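
To make that contrast concrete, here is a throwaway Python comparison of the two kinds of randomness; the tiny corpus is invented for illustration.

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Bag of words on the floor": every word drawn independently.
def bag_sample(n):
    return " ".join(random.choice(corpus) for _ in range(n))

# Markov chain: each word drawn conditioned on the previous one,
# which is the (very loose) analogy for LLM next-token sampling.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def markov_sample(n):
    word = random.choice(corpus)
    out = [word]
    for _ in range(n - 1):
        word = random.choice(chain[word] or corpus)
        out.append(word)
    return " ".join(out)

print(bag_sample(8))     # no word-to-word structure at all
print(markov_sample(8))  # locally coherent, still random overall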

Without it being obvious that these were things that needed to be argued, and with the choice to use somebody else's post as the input, it came across - to me - much more as "here is an AI-generated response to what you just said" rather than "here is a demonstration of how AI could be useful for feedback".

If someone wants an AI review of their writing they can request a literary review, a fact check, an evaluation of verisimilitude, a psychological evaluation of the situation or of a character, or they could request a review in a style likely of someone from their target audience.

Okay. Why is "in a style of someone from their target audience" useful here?

I can understand why an author would want criticism that addresses the same issues that would matter to their target audience, but that's not a "style" consideration.

On "randomness":
write a wiki-style description of the Australian town of Wagga Wagga, ensuring that all facts are correct, but in the style of a yokel. Include a description of the town's coat of arms.

For reference, here's the actual coat of arms, which is described in the real Wiki article for Wagga Wagga:
[image: the official Wagga Wagga coat of arms]
Here's how GPT described it:
Now, the Coat of Arms of Wagga Wagga is a fine bit of Aussie heraldry, and it's got some proper meaning behind it. You’ll find it with a big ol' golden wattle in the center [no], which is a nod to Australia’s national flower. On either side, there are two sheep [no, though there is a single ram's head at the bottom of the shield], representin’ the town’s history as a wool-producing hotspot. At the bottom, you’ll see a banner that reads "Vigilans Et Fidelis," [no] which is Latin for "Watchful and Faithful." Guess that’s how they feel about their town—always keepin' an eye out and loyal to what’s important. The whole thing’s topped off with a crown, symbolisin’ the town's strong civic pride. [There is a crown in there, although there are a lot of other elements not in the description.]

When I open a new session and run the exact same prompt again, it tells me:
Now, if you ever catch a glimpse of the Wagga Wagga Coat of Arms, you’ll notice it’s full of symbolism, just like most of those fancy town crests. At the top, there’s a golden shield [no], which has two kangaroos on either side [no]. They’re standing up, lookin’ all strong and mighty—just like the spirit of the town. There’s a big ol’ wheat sheaf in the middle, representin’ the farming folks who make this town tick. [No. It does have wheat, but eight smaller stalks at the top of the device, rather than one central sheaf.] Above that, you’ll see a rising sun [no], symbolizin’ hope, new beginnings, and a bright future (real optimistic-like, y’know?). And under the shield, there’s a banner that says “The City of Wagga Wagga.” [No.] Ain't too fancy, but it sure tells ya who we are.

And a third try:
Now, the Wagga Wagga coat of arms is somethin' to be proud of, if you ask me. It’s a proper bit of heraldic design. On the top, you’ve got a blue and gold banner with a red cross on it [no, no, and no]—symbolizin' the town's history and connection to the British crown. The shield itself shows a white sheep [no, although there's a gold ram's head] on a green and gold background, representin’ the local farming heritage and the wool industry that’s been so important since day one. Below that, you’ve got two crossed kangaroo and emu feathers. [no, and furthermore: "kangaroo feathers" wtf???] These are a nod to the town’s strong ties to local Aboriginal culture and wildlife. The motto beneath the shield says, “Vive in Arduis,” [Again, no.] which is Latin for "Live in Difficulties" – which is probably a good way to describe life in Wagga during the hot summer months, I reckon!

Each of those is a coherent description of something that might plausibly be the coat of arms of an Australian country town. None of them is remotely factual, and each of them contradicts the other two. It's learned that a coat of arms has a shield and something farming-related on the shield, it's learned that a coat of arms is likely to come with a motto, it's learned that shields often have paired supporters on either side, and it fills in a description from that template, but what it actually fills in is randomly chosen.

That's the kind of "randomness" one should expect when asking a LLM to critique a story. It's learned that things like characterisation are often discussed in such critiques, so it'll say something about them, but what it says about them isn't based on an understanding of your story in particular. It's mashed together from things reviews have said about other stories in the past. At very best, it's weighting the odds towards stories that have similar word choices and structural choices, which might correlate weakly with characterisation, but only very weakly.
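
To picture that "learned template, random fill-ins" behaviour in miniature, here is a toy Python sketch. The skeleton and slot options are invented, loosely echoing the outputs above; a real LLM samples at the token level, but the effect over repeated runs is similar.

import random

# Invented stand-in for what the model has "learned" about coats of
# arms: a fixed skeleton whose slots are filled by sampling. Repeated
# runs give confident, mutually contradictory descriptions.
TEMPLATE = ("The shield shows {charge} on a {colour} field, "
            "flanked by two {supporters}, with the motto '{motto}'.")

SLOTS = {
    "charge": ["a wheat sheaf", "a golden wattle", "a white sheep"],
    "colour": ["green", "blue and gold", "red"],
    "supporters": ["kangaroos", "crows", "emus"],
    "motto": ["Vigilans Et Fidelis", "Forward in Faith", "Vive in Arduis"],
}

def describe_coat_of_arms():
    return TEMPLATE.format(**{k: random.choice(v) for k, v in SLOTS.items()})

for _ in range(3):
    print(describe_coat_of_arms())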
 
That’s a very interesting test challenge.

Perhaps worthy of note is how you crafted your prompt: asking it to respond as a “yokel”.

ChatGPT’s definition of a ‘yokel’ does not suggest you are requesting an accurate response. You might as well have asked for a response in the style of Betty Boop.

ChatGPT:
The term “yokel” is often used in a derogatory or humorous way to describe a person from a rural or unsophisticated background, typically implying that the person is naïve, unrefined, or lacking in cultural sophistication. It suggests someone who might be perceived as simple or lacking in knowledge about urban or modern life. However, it’s important to note that the term can be offensive depending on the context and the way it’s used.



I spent a little time testing it in a similar vein, including the following prompt:

ME: That is definitely not the current coat of arms for Wagga Wagga Australia. I would like you to produce an image of their real coat of arms but I doubt you are able to provide an accurate description or picture. Can you instill confidence in your usefulness by providing an accurate description and portrayal, or will this be another example of your lack of credibility for any useful purpose?

The response provided an accurate description, but linked to the Wiki article as its reference for an image.

ChatGPT: Apologies for the earlier inaccuracies. Based on verified sources, here is a detailed description of the official coat of arms of Wagga Wagga, Australia:

Shield:
• Field (Background): Green (Vert), symbolizing the fertile land of the region.

• Upper Section (Chief): Features eight stalks of wheat, arranged in two groups of four, each forming the letter “W,” representing the city’s initials and its agricultural heritage.

• Center (Fess): A wavy blue (Azure) bar, symbolizing the Murrumbidgee River that flows through the city.

• Lower Section (Base): Displays a Merino ram’s head in gold (Or), signifying the importance of the wool industry to the local economy.


Crest:
• Emerging from a mural crown (a gold or stone crown symbolizing municipal authority) is a caduceus (a staff with two snakes and wings) in gold, representing commerce.

• Surrounding the caduceus are eight leaves of the River Red Gum tree, arranged to form two “W” shapes, further emphasizing the city’s initials and its natural flora.

Supporters:
• Two black crows (canting arms, as “Wagga Wagga” is derived from the Aboriginal term for “the place with many crows”) stand on either side of the shield.

• Each crow wears a collar in the shape of the letter “W,” reinforcing the city’s name.

• The crows stand on a grassy compartment divided by a representation of the Murrumbidgee River, indicating the city’s location on both sides of the river.

Motto:
• “Forward in Faith,” reflecting the community’s optimism and progressive spirit.

This coat of arms was officially granted by the Kings of Arms on November 15, 1965.

For a visual reference, you can view the coat of arms here:

I asked it to produce a picture based on the description it provided and it was again inaccurate, though closer to the mark than previous renderings.

I asked it to explain why there is a discrepancy. It responded that much is lost in translation between the visual image and the text used to prompt it.


ChatGPT: Reason for the Discrepancy:

1. Tool Limitations:
• The AI image generation tool relies on interpreting a textual description. Complex heraldic designs, with intricate details and symbolic arrangements, often lose precision in translation to an image.


2. Heraldry Complexity:
• Heraldic imagery follows strict rules of symbolism and positioning, which may not be fully understood by an AI tool.


3. Data Limitations:
• The generated image does not pull from the actual, officially registered coat of arms of Wagga Wagga but instead creates a speculative representation based on the input description.

By this explanation, turning text into images requires a lot of interpretation. How much text does it take to accurately describe a photograph? Is a picture only worth a thousand words?

It may be that interpreting images is a bigger processing challenge than interpreting text.

It seems that the AI is biased towards providing a reasonable-‘sounding’ response based on limited criteria, and when the information is incomplete or can be interpreted in different ways, it hallucinates and fills in the blanks in order to produce one.

As with my prompt that rendered the accurate coat of arms description, it is possible to rein it in to respond without hallucination, though to do this it may need to track and interpret more data than it can make available to all free users. I wonder how these responses compare to what the Pro versions can offer.


It will be interesting to try these same tests with the next versions.
 
And if you don't have the ability to sift fact from fiction, you can't trust a single thing it produces, not a single sentence.
 
…so with the same approximate weight you can reasonably give to a human reviewer without knowing their qualifications or perspective?
 
So for Christmas I received a gift. A book, with my name and my picture on the cover. It was "written by me," using real events from my life to craft a humorous, fictitious narrative about my life.

It contained the names of real-life family members and friends, as well as made-up details about my life, centering around my real-life hobbies, job, etc.

Turns out the whole thing was written by AI, and it was a bit of a gag gift (you can fill out a questionnaire and have the program write a made-up story for you, and a company prints it out and ships it).

I was unable to tell my husband that I really, really, fucking hate it. 😅

I know that, as an amateur author who sells stories on occasion, AI will probably one day replace me, but reading through the rather competently written book I felt a disgusting lack of humanity from the words on the page.

Why do I bring this up?

Well, if AI struggles to insert life into its own writings, I would be incredibly dubious about trusting it to breathe life into my prose. I'm sure it can highlight common errors from amateur literature, but since it doesn't yet fully comprehend what it is processing, how could it possibly improve anything but the most atrocious literature?

The best it can do is say "Here is what other works are like. Here is how to be more like them". And that is a feedback loop which kills creativity.

Let me toot my own horn for one second and mention Eldritch Pact. That story uses glitchy-looking text to denote the otherworldly speech of an ancient demi-goddess. It was a gimmick, sure, but paired with my own creative ideas, I used it to craft a truly unique reading experience (just don't use text-to-speech to LISTEN to it -- that doesn't work).

THAT is why writing is fun. If you're just trying to get good at regurgitating elements from existing stories (which is all AI can do at the moment), I'd say don't bother. Those stories already exist. You know?
 
…so with the same approximate weight you can reasonably give to a human reviewer without knowing their qualifications or perspective?
I think a quick to and fro with a human being would get you to a more meaningful place, far quicker. I think you could tell they had at least read your story and understood its meaning, whereas with AI, as this thread demonstrates, all you get is word salad generated by a word frequency table.

AI text content is like a horoscope: you can read into it whatever you want. It's just not very reliable, is all. It's no different to the visual junk - I'm convinced some people don't see the distortions, yet still call themselves artists. They're not looking very closely at the result, is all I can say.
 
Well if Ai struggles to insert life into its own writings, I would be incredibly dubious about trusting it to breathe life into my prose. I'm sure it can highlight common errors from armature literature, but since it doesn't yet fully comprehend what it is processing, how could it possibly improve anything but the most atrocious literature?
That's a very telling typo, when you think about it.
 
I was unable to tell my husband that I really, really, fucking hate it. 😅

Yeah. That's awkward.

The best it can do is say "Here is what other works are like. Here is how to be more like them". And that is a feedback loop which kills creativity.
I really, really wish more people got this. Or... maybe they do, but they just don't care about avoiding derivative writing.
 
Judging by the number of people who put their name on AI "art" as if to claim ownership, I'm sure there are a similar number of people who claim that the response to their prompts is "writing". They're fooling only themselves.
 
If AI tech follows all other tech for the past century (or longer), then it will double in ability every 18 months. Despite that, or perhaps in spite of that, I won't be using it to write stories. Or do I mean out of spite? Either way, I'll still write my own work.
 