Tried ChatGPT for feedback?

AlexBailey

Give it a try.👍

The free version of ChatGPT that I use will digest something around 2k words in a single prompt.

I tried many different approaches and was able to get some of the analytical responses I’ve been so wanting.

Here’s an example of the feedback:

(Alex) Does the writing in the story excerpts I’ve provided in this thread seem to be more of a novice writer or an experienced author?

(ChatGPT)
Based on the story excerpts you have provided in this thread, it is difficult to definitively determine whether the writing belongs to a novice writer or an experienced author. The excerpts showcase a range of storytelling elements, including character development, dialogue, and emotional depth. The writing demonstrates an ability to convey emotions, describe scenes, and create engaging interactions between characters.

However, it’s important to note that a few stylistic and technical improvements could enhance the overall quality of the writing. For example, there are occasional instances where the pacing and transitions between scenes could be smoother. Additionally, some sentences could benefit from more precise word choices or sentence structures to enhance clarity and flow.

Ultimately, it is possible that the excerpts could be the work of a novice writer who is developing their skills and refining their craft. Alternatively, they could also be the work of an experienced author who is in the process of revising and polishing their writing. Without further information, it is challenging to make a definitive assessment about the writer’s level of experience.
 
The free version of ChatGPT won’t review the sex scenes or give you advice on how to write a threesome, but it will respond to requests for analysis; just leave out the rutting.

Be thoughtful about your questions. It will respond to questions referencing many previous prompts, so even though the amount of text I could put in one prompt is limited, I was able to get it to “stitch” large parts together.

Try asking something like:

Provide a literary critique of the following excerpt from a larger story:

(Paste your excerpt)


Another one that was fun is:

“Provide a psychological analysis of the interaction between the characters in the following excerpt from the same story referenced in the previous prompt:”

(Enter story excerpt)
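
If you’d rather script it than paste into the website, the same idea works through the API. This is only a rough, untested sketch: it assumes the official openai Python package (v1.x), an OPENAI_API_KEY in your environment, and a placeholder model name and 2k-word guideline, so swap in whatever you actually have access to.

```
# Rough sketch only: send the "literary critique" prompt through the API
# instead of the ChatGPT website. Assumes the official `openai` package
# (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def literary_critique(excerpt: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask for a critique of one excerpt (keep it to roughly 2k words)."""
    prompt = ("Provide a literary critique of the following excerpt "
              "from a larger story:\n\n" + excerpt)
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    with open("excerpt.txt", encoding="utf-8") as f:
        print(literary_critique(f.read()))
```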
 
I’d far prefer human interaction and feedback, but the AI is instant and has endless patience. ;)
 
It is an interesting idea. What did you think about the feedback you pasted? I mean, it could have been just a generic mash-up of the usual human feedback. Or did you find it insightful?
 
I don't think it's actually reviewing your text at all, just providing the sort of words a review would feature. Remember that there's no actual thought behind the tool. I'm pretty sure it can't actually analyse what you're writing, just mimic a human who can, which means it isn't really addressing your piece. In other words, IMO it's just regurgitating generic 'review' content rather than addressing anything specific to your writing.

Like, look at that excerpt and tell me where it says anything specific. The whole thing is just waffle and weasel-words, and it provides no actual examples of what it's talking about, nor is it committing to a position. "You might be a beginner or you might be experienced and here's some vague and very common criticisms without examples, I dunno anything really". Useless.

It's just insincere word soup. Don't make the mistake of thinking these things are smarter or more flexible than they actually are.
 
Which is why it doesn't belong here in any form. This is supposed to be a place for original creative works, not regurgitated nonsense.

I agree about making content for stories; that’s not what I asked it for.
;)
 
It should not be allowed here in ANY form.

It should not be allowed anywhere in any form.

It should be outlawed.
 
I don't think it's actually reviewing your text at all, just providing the sort of words a review would feature. Remember that there's no actual thought behind the tool. I'm pretty sure it can't actually analyse what you're writing, just mimic a human who can, which means it isn't really addressing your piece. In other words, IMO it's just regurgitating generic 'review' content rather than addressing anything specific to your writing.

Like, look at that excerpt and tell me where it says anything specific. The whole thing is just waffle and weasel-words, and it provides no actual examples of what it's talking about, nor is it committing to a position. "You might be a beginner or you might be experienced and here's some vague and very common criticisms without examples, I dunno anything really". Useless.

It's just insincere word soup. Don't make the mistake of thinking these things are smarter or more flexible than they actually are.

It is able to provide specific examples if that’s what you ask for.

I didn’t post any of the responses that were specific to my excerpts. What it was useful for was seeing if I conveyed thoughts clearly.

It’s a tool, not a person. Do you find any value in using a word processor or any sort of grammar check? This is a tool that can serve many different purposes.
 
It should not be allowed here in ANY form.

It should not be allowed anywhere in any form.

It should be outlawed.

Lol. Yeah, no. The cat is out of the bag.

Do you know that some labor unions banned the use of tape measures and battery-powered drills on construction job sites when they were new? Supposedly they were banned because they were too helpful and reduced the number of hours a worker could bill for - yet what they really accomplished was allowing workers to be more productive.
 
I am sure that ChatGPT and others are being used right now by professionals as proofing tools.

The more AI learns, the more capable it will be. (Til it launches all those Nukes we still haven't disarmed, and starts building Terminators...)

AI of varying degrees is going to take over how we live our lives; Luddites may be (rightly?) scared of this, but the genie is well and truly out of the bottle now; there's no way to put it back in. Welcome to your new world.
 
It is an interesting idea. What did you think about the feedback you pasted? I mean, it could have been just a generic mash-up of the usual human feedback. Or did you find it insightful?

The feedback I posted previously was probably the least specific response it gave. I shared it because it was short and not very specific; most of the replies were much more in depth. In fact, you can prompt it to get into more depth about the same subject as a previous response.

Really, try it.

Try something like:

“Provide a psychological analysis of the character “Charlie” from the excerpt below:

(Enter your excerpt)”
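
The follow-up trick (referencing a previous prompt) works the same way through the API; you just re-send the earlier turns along with the new question so the model still “sees” the excerpt. Another rough, untested sketch, with the same caveats as the one in my first post (official openai package, placeholder model name, and “Charlie” taken from the example prompt above):

```
# Rough sketch: ask follow-up questions about the same excerpt by carrying
# the earlier conversation turns forward. Assumes the `openai` package (v1.x)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder; use whichever model you have access to
history = []

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    answer = reply.choices[0].message.content
    # Keep the assistant's answer so later questions can reference it.
    history.append({"role": "assistant", "content": answer})
    return answer

excerpt = open("excerpt.txt", encoding="utf-8").read()
print(ask("Provide a literary critique of the following excerpt "
          "from a larger story:\n\n" + excerpt))
print(ask('Provide a psychological analysis of the character "Charlie" '
          "from the excerpt above."))
```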



The useful thing to me is that the response gave feedback on how I had presented my character. Did my ideas come across well?



Check out this prompt and response that is more typical in length:

(Alex)
Provide a literary critique on the quality of the writing in the parts of the story of Alex and Sam that I provided in previous posts today.



(ChatGPT)

The excerpts from the story of Alex and Sam demonstrate several notable qualities in terms of writing.

1. Vivid Imagery: The writing effectively creates vivid and detailed imagery, allowing readers to visualize the characters and their surroundings. Descriptions such as “They’re sitting near the far end of the stage in the lower tier of bleachers” and “His cheeky grin makes his face even more handsome” bring the scenes to life and help readers form a clear mental picture.
2. Authentic Dialogue: The dialogue in the story feels natural and authentic, capturing the unique voices and personalities of the characters. The conversations flow smoothly, with distinct speech patterns and colloquial expressions that add realism to the interactions. For example, Jena’s comment, “You sound just like Kayla,” and Sam’s response, “Ah, you were stuck with my little sis? Taught you some proper ’Stralian lingo now did she?” showcase individual character traits through dialogue.
3. Emotional Depth: The writing effectively conveys the emotional depth of the characters, bringing their inner thoughts, struggles, and desires to the forefront. The inner monologues and reactions provide insight into their fears, hopes, and vulnerabilities. This allows readers to connect with the characters on an emotional level and understand the complexities of their experiences.
4. Engaging Pacing: The pacing of the story is well-managed, providing a balance between moments of introspection and action. The narrative unfolds at a pace that keeps the reader engaged and invested in the progression of the characters’ relationships. The use of shorter, impactful sentences during moments of heightened emotion or tension helps maintain a sense of momentum.
5. Evocative Language: The language used in the story is evocative and descriptive, enhancing the reader’s experience. Words and phrases such as “nervously climb the steps,” “breath-catching stillness,” and “melting in his arms” evoke specific emotions and imagery, heightening the impact of the scenes.

Overall, the quality of the writing in the provided excerpts is commendable. The story effectively captures the nuances of the characters’ emotions, employs vivid imagery to paint a clear picture, and utilizes authentic dialogue to bring the characters to life. The engaging pacing and evocative language further contribute to the overall effectiveness of the storytelling.
 
I'm starting to think a few people around here also believe the earth is flat. AI technology is already everywhere: it's integrated into your phone, your Windows OS, even the spellchecker in your word processor. Unless you are still using a typewriter, most recent stories will (knowingly or unknowingly) include some form of AI assistance. It's not going away, and thinking it will, or trying to ban everything, is honestly incredibly naïve.

That doesn't mean this technology doesn't require more regulation; it's obviously one of the biggest challenges in the near future. It's also a learning process, figuring out how we should all use these technologies in a positive way, in an ethical way, in a creative way. If you are an artist and just ask an AI to make a piece of art for you, then you are both lazy and dumb. But if you use AI as an assistant during the creative process, then I don't see the issue; you still remain the director of your own work.

As an example, there is an AI tool called Sudowrite, which I've used before. You can ask it to write you an entire story, and the end product would be poor compared to a human author. But you can also ask this tool to brainstorm ideas for how to continue your plot. Or you can highlight a certain sentence or paragraph and then ask it to suggest improvements, like making it more descriptive, using all five senses, or making the writing more intense. Used in the right way, this can help you improve your writing, while you remain in control.
 
It is able to provide specific examples if that’s what you ask for.

I didn’t post any of the responses that were specific to my excerpts. What it was useful for was seeing if I conveyed thoughts clearly.

It’s a tool, not a person. Do you find any value in using a word processor or any sort of grammar check? This is a tool that can serve many different purposes.
This just makes me think of the lawyer who used it for a case - it gave him lawsuits that didn’t exist and assured him they were real when he questioned it. ‘Hallucinations’ and lies are integral to the machine; that guy’s legal career is probably over now. I don’t trust the thing to say anything meaningful, not in its current state. It can identify a sentence and put together a response based on predictions of what a human reviewer might say, but so what? I could already feed my writing into analytic software several years ago; I don’t want some waffling liar of a chatbot to fluff me up on top of that.

You do you, but I don’t like the thing. If I sound a bit hostile here, it’s because the damn machines were trained on stolen writing by abused workers and now the 1% want to use them to cut creatives out of their jobs and deny them a fair wage. I’m sure GPT is a fun toy, but in a just world the people plotting to replace real writing with it would get a copyright and attribution hammer dropped straight on their foreheads.

(Sadly, I am not convinced we live in a just world)

Lol. Yeah, no. The cat is out of the bag.

Do you know that some labor unions banned the use of tape measures and battery-powered drills on construction job sites when they were new? Supposedly they were banned because they were too helpful and reduced the number of hours a worker could bill for - yet what they really accomplished was allowing workers to be more productive.

On the other hand, fusion power has been twenty years away for the last fifty years. And what happened to those self-driving cars we were meant to have by now? Technology advances, yes, but people use it in useless ways and hype often fizzles out. I certainly don’t think it’s going to be firing off the nukes any time soon, as an above post put it.
 
This just makes me think of the lawyer who used it for a case - it gave him lawsuits that didn’t exist and assured him they were real when he questioned it.
I think there's still a possibility of sanctions there.
 
In reply to the OP, I believe ChatGPT can already do remarkably well in analyzing texts. And unlike what some may believe, it's clear that it does understand a text and can provide meaningful feedback, not some generically generated reviews. It's definitely still limited, but improving rapidly. I'm sure if you tried GPT-4 (paid version only), you'd get an even better response. The main constraint that is still holding these systems back is a limited context window, so they start to forget previous points from a larger conversation, or cannot handle very large texts. But this will soon change; for example, Microsoft recently released a detailed research paper about a new AI with a 2 billion token limit. That is an exponential improvement ... on a logarithmic scale.
 
Eh, I’m sure there are use cases for the technology. But it needs to be handled safely and carefully, for much the same reasons we don’t let drunken children drive cars without seatbelts.
 
This just makes me think of the lawyer who used it for a case - it gave him lawsuits that didn’t exist and assured him they were real when he questioned it. ‘Hallucinations’ and lies are integral to the machine; that guy’s legal career is probably over now. I don’t trust the thing to say anything meaningful, not in its current state. It can identify a sentence and put together a response based on predictions of what a human reviewer might say, but so what? I could already feed my writing into analytic software several years ago; I don’t want some waffling liar of a chatbot to fluff me up on top of that.

You do you, but I don’t like the thing. If I sound a bit hostile here, it’s because the damn machines were trained on stolen writing by abused workers and now the 1% want to use them to cut creatives out of their jobs and deny them a fair wage. I’m sure GPT is a fun toy, but in a just world the people plotting to replace real writing with it would get a copyright and attribution hammer dropped straight on their foreheads.

(Sadly, I am not convinced we live in a just world)



On the other hand, fusion power has been twenty years away for the last fifty years. And what happened to those self-driving cars we were meant to have by now? Technology advances, yes, but people use it in useless ways and hype often fizzles out. I certainly don’t think it’s going to be firing off the nukes any time soon, as an above post put it.

You make some good points, but the quality and depth of the responses are based on your questions and the material it has been provided with.

The attorney who asked about cases could have limited the scope of review material, possibly specifying that it use only real cases from within certain jurisdictions.

As a test I once asked ChatGPT about the advantages of different kayak designs, specifically what design features would be better for surfing waves. It described various advantages of different hull designs, but it made no mention of using a spray skirt.

If I’d simply taken its raw advice without asking for further explanation, my kayak would have swamped with the first wave.

Adding “what safety equipment and precautions should be considered…” produced more important information.

You need to be thoughtful in how you use it. ;)
 
Personally, I wouldn’t use it for research at all. The hallucination risk is high enough that I’d have to double-check any info it gave me; it’s noted for being able to lie confidently, so I’d be uneasy about stuff I didn’t personally know how to verify (how would you know what to define and pare down if you didn’t know the topic?), whereas if I could confidently check something myself, I wouldn’t need the bot in the first place. I’d rather read an article instead of relying on something that’s going to remix said article in its own words.

As I once read somewhere, if it just spat a dictionary definition or Wikipedia summary at you in response to a question like a good search engine would*, that would in many ways be more useful and efficient than the conversational way it actually speaks. It’s the inefficiency itself that we find impressive.

*Not that there are many good search engines around any more, mind. Hah.
 
That is certainly true for vanilla ChatGPT, since its 'knowledge' is part of its training. I believe hallucinations are even inherent to the design, since it's built to occasionally deviate from the statistically best answer (to add some randomness/creativity/originality to its output). But once you give it access to external sources, it becomes possible to effectively build source checking into a research process. That's already starting to happen with plugins; there are some good ones available already specifically for research: ScholarAI, NextPaper.AI, Litmaps, Science, and the awesome Wolfram plugin for computations. I wouldn't say it's already 100% reliable, but I think it will be at some point.
 
Note: I’m using the free version with my preferred settings for statistical relevance.


(Alex)
Can ChatGPT reliably be used to fact check itself?

(ChatGPT)
While ChatGPT can be used to fact-check certain claims by searching for information and providing insights, it’s important to note that it relies on the data it was trained on, which goes up until September 2021. Consequently, it may not have access to the most recent information or be aware of developments that have occurred after its knowledge cutoff.

In some cases, ChatGPT can help verify factual information by providing context or pointing towards reliable sources. However, for critical or time-sensitive matters, it’s advisable to consult up-to-date and authoritative sources or fact-checking organizations to ensure the accuracy of the information.
 
I read in the Forums that Lit uses a bot to screen the stories. Isn't that a use of AI? If so, then Lit already makes use of it. However, in reply to my attempt to use AI-generated graphics, Lit rejected the story because the graphics needed to be human-generated due to potential copyright issues. When copyright is resolved satisfactorily, AI could write all the porn published here – perhaps better than we do. Even as I type, some program is checking my spelling, correcting it, or offering to complete it with type-ahead suggestions, though not with all the heart we pour into it. However, a recent TV interview about AI suggested that the latest versions create expressive and human-level responses.

I tried ChatGPT yesterday, asking it to help me find Excel formulas. I wanted to figure out how to reverse-engineer the averages Lit uses so that I had some idea of how my stories were scored. [I made up my own trial-and-error system a couple of years ago.] After ten tries of copying the formulas into Excel and getting error messages, I gave up. None of the AI's formulas worked. So, it's not ready for me. As to stories, I've yet to go there for help, but this thread piques my curiosity.
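
For what it's worth, the arithmetic side of this is simple enough in plain Python, which may be more reliable than coaxing Excel formulas out of the bot. The following is only a rough, unverified sketch of one way to enumerate the possibilities; it assumes the displayed score is simply the mean of 1-5 votes rounded to two decimals, which may not be how Lit actually calculates it, and the numbers are made up.

```
# Sketch only: IF the displayed score were just the mean of 1-5 votes rounded
# to two decimals (an assumption, not confirmed), the vote totals that could
# produce it can be enumerated like this.
def possible_totals(shown: float, votes: int) -> list:
    return [s for s in range(votes, 5 * votes + 1)
            if round(s / votes, 2) == shown]

shown, votes = 4.56, 27  # made-up example numbers, not real story stats
for total in possible_totals(shown, votes):
    new_avg = (total + 5) / (votes + 1)  # what one more 5-vote would do
    print(f"total={total}  one more 5 -> {new_avg:.2f}")
```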

Regarding its legitimacy, or banning it on Lit, that's a moot point. It seems it is already embedded. How do you police something that is nearly undetectable? Develop an 'honor system' to control it? That doesn't work for anything else - just an observation.
 