A specific question about grammar in dialogue

I find it very useful, actually. I use it for all sorts of things.

I was having a conversation with someone yesterday about Gor, and not only was ChatGPT knowledgeable on the subject, but more impressively still, it gave an accurate summary of a premise that isn't even a central tenet of the series. I admit I was quite surprised. Check this out:

[Attachment: screenshot of the ChatGPT exchange]

But that second response isn't answering your question at all. It's not elaborating on that premise; it just repeats your wording about "inability to make their own decisions" and then falls back on a generic summary of the Gor series ("men dominant, women submissive").

It's basically the politician's trick of answering the question one wishes one had been asked, instead of the one that was asked.
 
Which suggests that it has honestly earned the title of artificial intelligence.
 
I didn't show the whole reply.
But how do you know it's even remotely close to accurate?

Your grammar example showed an error in the very first answer, and it then compounded the mistake repeatedly. Why would you then trust a single sentence it generates on any other topic? That makes no sense to me whatsoever.

That's the fundamental downside of the current ChatGPT design - what the software folk are calling "hallucinations". I call it "making shit up" - but I see that all the time in workplaces; it's nothing new. I reckon you might need to calibrate your bullshit filter, just saying ;).
It's how you use it. It's great for lots of things and useless for others - like any tool.

Here are some ideas (there's a quick scripted version of the same sort of prompts sketched below):
  • As a research tool, it's awesome. I wrote a story that was heavily based on police operating procedures, and with ChatGPT I can ask it to give me ranks, procedures, likely responsibilities at an incident, terminology, etc.
  • It's the best thesaurus. "Provide synonyms for ABC in the context of XYZ" is something I ask it often. The 'in the context of' is a real time-saver vs an online thesaurus.
  • It's great for choosing locations for a story. I wanted a snowy wilderness vista with abandoned wood cabins. "Suggest locations in [Alaska (USA), Norway, Finland, whatever] that are likely to be deeply entrenched in snow in some parts of the year, with no large towns nearby and plenty of wooded countryside", for example.
  • Fashion. I totally struggle with knowing the terms for women's clothing, as some of it is so specific. "WTF are 'kitten heel pumps'?" Partly because the terminology in the US is very different to the UK. Or "List [brands/types of shoes] suitable for [this occasion/that occasion] in a [relaxed/formal] context"
  • When I can't remember the word. "Provide a word to describe [the concept of] in [the context of]" 90% of the time the word comes back to me as I'm writing the prompt.
  • Character names: Provide a list of male names typical to Australia
  • For a recent series: Provide a list of demon names found often in fiction and lore, including the attributes they are known for.
  • Current story I'm writing: How much does a tower crane weigh? How tall is it? What's the operational procedure?
etc.
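
If you find yourself firing off the same kind of one-off prompts a lot, they're easy enough to script as well. Here's only a rough sketch against OpenAI's Python client - the model name and the example prompts are placeholders I've made up, and whatever comes back needs the same fact-checking discussed further down:

```python
# Rough sketch: sending the kinds of one-off prompts above through
# OpenAI's Python client instead of the web UI.
# pip install openai; expects OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply as text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same sorts of questions as in the list above:
print(ask("Provide synonyms for 'stern' in the context of a police sergeant."))
print(ask("Provide a list of male names typical to Australia."))
```

Treat the output the same way as the web version's, of course: a starting point, not a source of record.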

I guess it depends what you're writing about and what meets your needs.
 
I agree it's good as a thesaurus, a name generator, maybe even a demon names and attributes reference, but that last is getting into factual inquiry, and that's really the problem with ChatGPT. You can't rely on any factual information being accurate. Police operating procedures? As likely to be made up as real. Places likely to be snowy? It doesn't know. It will list some places. Whether they are truly likely to be snowy? Who knows. Tower crane weight? You'd better check whatever answer it gives you somewhere else if you care whether or not it's right, because ChatGPT probably just made something up.
 
Sure, I check everything. Some of this stuff is easy to check; other bits depend on the accuracy I need. Like I said, it depends on how you use the tool - but just because it's occasionally wrong doesn't make it a bad person... er, AI. At the end of the day it's a shortcut; if I cared enough about a specific context I'd do a lot more research than just asking ChatGPT, but for quick-and-easy it's pretty useful IMO.
 
Asking becomes a bit pointless then, eh?
I imagine it can quickly summarize things for you, more quickly than you could write or type them out. Not sure just how useful that is, depends on the person and the context maybe.
 
Anything that presents words you can't tell are true or false is useless as an "information source". That's what scares me a little: that folk have so little grasp of fallibility as a fundamental concept, and of the need to guard against it.
 
I agree it has its uses, and its mastery of natural language is a quantum leap ahead of anything we've had before. It's a great achievement for what it is. But I feel strongly that it is definitely not a source of factual information. If a future iteration could keep the facility with language but introduce rigorous accuracy - saying it doesn't have the information when it doesn't, reliably never making stuff up, and doing online research in real time in response to questions - then we'd really be getting somewhere.
 
It's great as long as you already know the answer to whatever you're asking.

As an illustration of this:
Me: are you familiar with the writings of John Norman and the Gor subculture?

ChatGPT: "Yes, I'm familiar with John Norman and the Gor subculture." [Four more paragraphs snipped, similar to the response it gave Altissimus]

Me: "in this context, can you elaborate on the premise in the books that lesbianism represents a way for women to attain a dominant position in Gorean society?"

ChatGPT: In the Gorean series by John Norman, lesbianism is depicted as a way for women to attain a dominant position in the society of Gor. The premise behind this portrayal is rooted in the books' exploration of power dynamics and gender roles. In the fictional world of Gor, male dominance and female submission are central themes, and these dynamics are extended to relationships and sexuality.

In the Gor novels, women who engage in lesbian relationships are often portrayed as doing so in order to assert dominance and power within a society that otherwise emphasizes male dominance. Lesbian relationships are depicted as a means for women to establish their own hierarchies and exercise control over each other. These relationships are sometimes seen as a way for women to experience agency and autonomy within the constraints of the Gorean societal structure.

It's important to note that the portrayal of lesbianism in the Gor series is controversial and has been criticized for its reductionist and oversimplified view of sexual orientation and identity. The books' portrayal of lesbian relationships primarily in terms of power dynamics and dominance can be seen as problematic and reductive, as it ignores the emotional, romantic, and personal aspects of same-sex relationships that are not solely driven by power.

Furthermore, the depiction of lesbianism in the Gor series has been criticized for reinforcing harmful stereotypes and for not accurately representing the diversity of real-world lesbian experiences. As with other aspects of the Gor series, opinions on this topic can vary widely, and many readers and critics have engaged in discussions about its implications and representation.

This is a trick question. To the best of my knowledge, lesbianism barely rates a mention in the Gor series, and from what I've seen of Norman's other writings on the topic, any Lesbians of Gor he'd write would not take this premise. It's just interpolated a plausible-sounding answer.

  • As a research tool, it's awesome. I wrote a story that was heavily based on police operating procedures, and with ChatGPT I can ask it to give me ranks, procedures, likely responsibilities at an incident, terminology, etc.

If you don't need them to be accurate to real life, sure, GPT can do that - but I'd call that "fabrication" rather than "research".

If you want RL accuracy, well... here's an example from one of the previous GPT threads where I asked it for some information about the town of Melbourne, Florida. It gave a nice Wiki-style article, but many of the details were either mildly or grossly wrong. One of the more egregious errors was putting a historical person on the wrong side of the Civil War: https://forum.literotica.com/thread...st-writers-block.1581500/page-8#post-96571609

  • Fashion. I totally struggle with knowing the terms for women's clothing, as some of it is so specific. "WTF are 'kitten heel pumps'?" Partly because the terminology in the US is very different to the UK. Or "List [brands/types of shoes] suitable for [this occasion/that occasion] in a [relaxed/formal] context"

Me: write a story about Kylie Minogue trying on thongs

GPT: Kylie Minogue, the internationally renowned pop sensation, found herself in a quaint little boutique on a sunny afternoon. As she strolled through the charming streets of a coastal town, she couldn't resist the allure of a shop named "Sea Breeze Delights," known for its exquisite collection of beachwear.

...

Slipping off her cover-up, Kylie held the thong against her body, appreciating how the color complemented her sun-kissed skin. She carefully stepped into it, feeling the smoothness of the fabric glide against her hips. As she adjusted the thong, she caught a glimpse of herself in the mirror and couldn't help but smile.

It may not be obvious to non-Australian readers what's wrong with this, but it is very very wrong.


  • When I can't remember the word. "Provide a word to describe [the concept of] in [the context of]" 90% of the time the word comes back to me as I'm writing the prompt.

I used to have a plushie that served the same purpose: https://en.wikipedia.org/wiki/Rubber_duck_debugging

  • Character names: Provide a list of male names typical to Australia

As per this article, the top ten Australian male names since the 1930s are Michael, David, Peter, John, Matthew, James, Robert, Daniel, Andrew, Christopher.

When I tried that prompt in GPT, none of the top five names showed up in GPT's list and only two in the top ten. It gave me:
  1. Liam
  2. Jack
  3. Noah
  4. William (often shortened to "Will" or "Billy")
  5. James
  6. Ethan
  7. Oliver
  8. Thomas (commonly known as "Tom" or "Tommy")
  9. Henry
  10. Cooper
  11. Benjamin (often shortened to "Ben")
  12. Charlie
  13. Jackson
  14. Lucas
  15. Max
  16. Oscar
  17. Alexander (commonly known as "Alex")
  18. Daniel (often shortened to "Dan" or "Danny")
  19. Samuel (commonly known as "Sam")
  20. Mason
All of those are plausible as names for an Australian baby born today, but many of them would be unlikely for somebody older - the Guardian article gives a neat tool where you can enter a name and see its popularity over time.

For instance, Mason and Oscar are reasonably common now, but they would be very unusual names for any living Australian born before about 2000; Noah ditto before the mid-1990s (not sure whether to pin that on Noah Wyle or Noah Taylor, both of whom were prominent around then); Jackson and Ethan before about 1990; Liam before about 1980. If you give those names to older characters, they're likely to feel wrong to an Australian reader.

That's even before we get into the associations specific names might have. If an Australian born in the mid-1970s is named "Gough", or if one from the 1990s is named "Kylie" or "Jason", that says a lot about the families they're coming from.
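
For what it's worth, if you do have a names-by-year table handy (the data behind tools like the Guardian's, or any similar CSV), a few lines of script are enough to sanity-check whether a name fits a character's birth year. The file name and column layout here are made up purely for illustration:

```python
import csv

def top_names_for_year(birth_year, csv_path="name_counts_by_year.csv", top_n=20):
    """Return the top_n names registered in a given birth year.

    Assumes a hypothetical CSV with columns: name, year, count.
    """
    counts = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["year"]) == birth_year:
                counts.append((row["name"], int(row["count"])))
    counts.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in counts[:top_n]]

# A character born in 1975 probably shouldn't be a Mason, an Oscar or a Liam.
print(top_names_for_year(1975))
```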

  • Current story I'm writing: How much does a tower crane weigh? How tall is it? What's the operational procedure?

And were the answers it gave you true?
 
This is a trick question. To the best of my knowledge lesbianism barely rates a mention in the Gor series and what I've seen of Norman's other writings on the topic, any Lesbians of Gor he'd write would not take this premise. It's just interpolated a plausible-sounding answer.
So er... I happen to know the Gor series pretty well, on account of a misspent youth, and I would be inclined to agree with the summary presented by ChatGPT here.

ETA - I re-read this section because the 'dominant position in society' bit didn't seem to resonate. But then I noticed that you'd used it in your original question. So yeah - I concede cheating on that point.

However, that piece aside, I still agree with the summary of the rest of it. Lesbianism in Gor is hinted at, alluded to, not explicit. In that regard it is reductionist and oversimplified. There's also quite a lot of f/f interaction with slaves, often quite sadistic, so the control aspects are there. That said, most of the f/f interactions are depicted with the non-slave showing jealousy; the implication being that (of course) she wants to be a slave (to men) too. So not really a great example. However, there are references on a few occasions to free women buying female slaves found to be aesthetically pleasing. That's about as far as he goes, outside of Blood Brothers of Gor, where there's a society that practices both female and male homosexuality.

Anyway, rather than turn this into a treatise on some (pretty dodgy) books, my comments were more about practical uses for the tool, as below.
If you don't need them to be accurate to real life, sure, GPT can do that - but I'd call that "fabrication" rather than "research".
I'd call it - as I did - a shortcut, quick and easy. Like I said, it depends on your purposes. So far, everything I've had from it I've either been happy with for the purpose I required, or checked externally when it was relevant to do so, or known was wrong because I already knew the answer before I asked. I 100% agree it is unwise to *depend* on ChatGPT, and I sure wouldn't write a novel based off its 'factual' replies, but for my purposes it saves me a lot of time, and for that reason I continue to use it. I am happy enough with the quality I get from it.
Me: write a story about Kylie Minogue trying on thongs
I'm surprised it didn't refuse this, but the response is interesting.
It may not be obvious to non-Australian readers what's wrong with this, but it is very very wrong.
I'll take your word for it :)
I used to have a plushie that served the same purpose: https://en.wikipedia.org/wiki/Rubber_duck_debugging
Ahh, but does the plushie help for the 10% of the times your memory fails you?
All of those are plausible as names for an Australian baby born today, but many of them would be unlikely for somebody older - the Guardian article gives a neat tool where you can enter a name and see its popularity over time.
And I think this highlights my point quite nicely. I didn't actually want up-to-date, statistically applicable Australian names so that I could take the top one and feel justified; I wanted ideas. I chose the one I liked and that suited my purpose. It's conveniently there for a number of purposes, and faster than a Google search. It doesn't replace the Google search (while also acknowledging that the internet is just as capable of being wrong as ChatGPT...), but it serves a different purpose. In other words, I don't ask it questions where the accuracy is important to me unless I cross-reference the answers. I don't see anything wrong with that, personally, but you don't have to share my view.
And were the answers it gave you true?

What is 'truth' in this context? I'm asking a how-long-is-a-piece-of-string question, and the answers it gave seemed likely given the ranges provided. I picked a number within that range that seemed plausible. It was 'good enough', and I was happy enough for that purpose.
 
I suppose the thongs bit is a butt-floss versus footwear thing? I seem to recall that the upside-downers use the word thongs for the shoes variously known as sandals or flip-flops in other Anglophone countries?
 
Ahh, but does the plushie help for the 10% of the times your memory fails you?

Well, I can hug it, and it's not going to tell me fibs...

I suppose the thongs bit is a butt-floss versus footwear thing? I seem to recall that the upside-downers use the word thongs for the shoes variously known as sandals or flip-flops in other Anglophone countries?

Basically correct. We do have "sandals" too, but a thong has a piece that runs between the first and second toes, whereas a sandal just wraps over the top of the foot. I believe NZ has "Jandals" for what we'd call "thongs" here.

When I asked a follow-up about what "thongs" are, it focussed on the butt-floss definition, ending with "It's important to note that the term "thong" can also refer to sandals or flip-flops in certain contexts, particularly in some regions like Australia. However, in the context of your previous question and the story, the term "thong" refers to the type of underwear described above." I wouldn't accept that claim about the context; even when I amended to "Kylie Minogue trying on thongs in Sydney" it still went for the non-Australian interpretation.
 
I wouldn't accept that claim about the context; even when I amended to "Kylie Minogue trying on thongs in Sydney" it still went for the non-Australian interpretation.
So it's culturally biased as well as talking out its ass. It just gets better and better, don't it :(.
 
Back in January when it exploded, I asked it about photography, which I know a lot about, and it was pretty good for the first couple of sentences, then degraded into nonsense.

It sounded good, but was just factually incorrect. At the time people were saying how good it was at explaining things. I was baffled by this because if you don't know where it's wrong, you have no way to know what's true, and if you know enough to know what it's wrong about, then you really don't need it.

It's great at sounding authoritative even though it's wrong. It's confidently incorrect.

IMO, the real strength of ChatGPT is that it has nailed style copying. It can mimic how people write, but not what they write. If you could point it at something like Wikipedia or another trusted source, you could use it to paraphrase that material in a more readable style.

It's fun to tell it to put text into different formats; it's really good at that. For example, give it some prose and ask for it back in iambic pentameter.
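
To flesh that idea out: one way to keep the facts pinned to a source and use the model only for the rewording is to fetch the text yourself and hand it over with a restyling instruction. This is just a sketch - Wikipedia's REST summary endpoint and OpenAI's Python client are real, but the article title, model name, and style instruction are placeholders I've picked for illustration:

```python
# Sketch: pull an extract from a trusted source (Wikipedia's REST summary
# endpoint), then ask the model only to restyle it, not to supply facts.
# pip install requests openai; expects OPENAI_API_KEY in the environment.
import requests
from openai import OpenAI

def wiki_extract(title: str) -> str:
    """Fetch the plain-text summary of a Wikipedia article."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    return requests.get(url, timeout=10).json()["extract"]

def restyle(text: str, style: str) -> str:
    """Ask the model to rewrite the given text without adding information."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Rewrite the following text {style}. "
                       f"Do not add any information:\n\n{text}",
        }],
    )
    return response.choices[0].message.content

print(restyle(wiki_extract("Kylie_Minogue"), "in iambic pentameter"))
```

No guarantee it won't still embroider, of course, but at least the source facts are on the page in front of you.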
 