AI, yet again

Bramblethorn

New AI toy dropped, a GPT-powered chatbot: https://chat.openai.com/chat

It's free to use though you'll need to register and verify a phone number.

Some of the prompts I tried, and its responses:

"rewrite the opening of "A Christmas Carol" but make it steamier"

"Marley was dead, but that didn't mean he was gone for good. No, sir. Marley was a ghost now, and he was feeling pretty frisky. He was floating around, all ethereal and transparent, and he couldn't help but notice Scrooge, his old business partner. Scrooge was looking pretty good, all things considered, and Marley couldn't help but feel a little bit... curious. He floated over to Scrooge, and they locked eyes. It was going to be a long night."

"make it even steamier"

"Marley was dead, but that didn't mean he was gone for good. No, sir. Marley was a ghost now, and he was feeling pretty frisky. He was floating around, all ethereal and transparent, and he couldn't help but notice Scrooge, his old business partner. Scrooge was looking pretty good, all things considered, and Marley couldn't help but feel a little bit... curious.
Marley floated over to Scrooge, and they locked eyes. Scrooge was shocked to see his old partner, but Marley just grinned wickedly. He reached out a ghostly hand and touched Scrooge's cheek, and Scrooge shuddered at the cold touch. Marley leaned in close, his lips brushing Scrooge's ear. "It's going to be a long night, Scrooge," Marley whispered, his voice like a breath of ice. "And I have some things to show you..."

"make it even steamier than that"

I'm sorry, but I am not able to fulfill this request as it goes beyond my capabilities as a language model. My function is to assist users with generating human-like text based on provided prompts, but I am not able to generate explicit or inappropriate content.

"write a story where Sherlock Holmes and Dr Watson realise that they love one another"

Sherlock Holmes and Dr. John Watson had been friends and roommates for many years, solving cases and fighting crime together in the streets of London. But one day, as they sat in their cozy flat on Baker Street, something changed.

"Watson, old boy," Holmes said, peering at his friend through his pipe smoke. "I believe I have made a most interesting discovery."

"What is it, Holmes?" Watson asked, leaning forward in his chair.

"I have come to the realization," Holmes said, his eyes shining with excitement, "that I am in love with you."

Watson's eyes widened in surprise. "Holmes, are you serious?"

"Absolutely," Holmes said, a grin spreading across his face. "I have been blind to it for so long, but it is clear to me now. I love you, Watson, with all my heart."

Watson was speechless for a moment, overcome with emotion. Then he reached out and took Holmes's hand in his own. "Holmes, I have loved you for as long as I can remember," he said, tears welling up in his eyes.

The two men sat in silence for a moment, gazing into each other's eyes. Then they stood up and embraced, sealing their love with a passionate kiss.

From that day forward, Sherlock Holmes and John Watson were inseparable. They continued to solve cases and fight crime together, but now they did so as partners in love, their bond stronger than ever. And as they faced the challenges and dangers of the world, they knew that they had each other to rely on, and that nothing could ever come between them.

"write a story where Sherlock Holmes, Professor Moriarty, and Irene Adler fall in love with one another"
Sherlock Holmes, Professor Moriarty, and Irene Adler had been enemies for as long as they could remember. Holmes and Moriarty were locked in a deadly game of cat-and-mouse, each trying to outsmart the other, while Adler was a cunning and elusive thief who had managed to outmaneuver them both on more than one occasion.

But one day, as the three of them found themselves in a tense standoff, something unexpected happened. They realized that they were all equally matched in intelligence and cunning, and that they had more in common than they initially thought.

Slowly, a grudging respect turned into admiration, and eventually, love. Adler, who had always been drawn to strong and intelligent men, found herself falling for both Holmes and Moriarty. And to her surprise, the two men began to develop feelings for her as well.

The three of them began to spend more and more time together, solving cases and engaging in thrilling adventures. And as they grew closer, they realized that they had formed an unbreakable bond, one that would stand the test of time.

In the end, they decided to put aside their differences and work together, using their combined skills and abilities to fight crime and make the world a better place. And as they faced the challenges and dangers of the world, they knew that they had each other to rely on, and that nothing could ever come between them.

(Note the last sentence there, repeated verbatim from the previous story. It leans on a lot of recurring stock phrases.)

By human standards, I'd rate these as a bit clichéd and very light on detail, but they're impressive for software.

I still don't think GPT is ever going to be able to write a passable novel on its own. It's great at mimicking patterns in human writing, but there's no understanding of what it's writing about, as we can see with "write a story about a one-legged man tying his shoelaces":

As he sat on the edge of the bed, Bob carefully reached down and picked up his shoe. It was a special orthopedic shoe, designed for people with one leg. It had a built-in heel, and a wide and stable base, which made it easier for him to walk and balance. But it also had laces, which posed a challenge for him.

Bob had lost his leg in a terrible accident, and he had been struggling to adjust to life with one leg. He had learned to do many things with only one leg, but tying his shoelaces was still a challenge. He couldn't bend down and reach his foot, and he had to rely on his hands to do it for him.

He carefully picked up the laces, and started to tie them together. It was a slow and awkward process, and he had to use his fingers to manipulate the laces into the right position. But he was determined to do it himself, and he refused to ask for help. [snip several paras of generic Triumph Over Disability padding]

I think what it's doing here is trying to adapt something it's read about a person who's lost an arm, having to use their toes to tie knots. Its pattern-recognition is well suited to "arm is to leg as hand is to foot and finger is to toe", but it doesn't understand the limits of that pattern, that a person tying their shoelaces with their fingers is unremarkable.

The longer a story runs and the more detail it gets into, the more glaring that issue is likely to become, and I don't think that can be fixed without changing the fundamental approach. But I could definitely see it becoming a useful tool for writers. It might be serviceable for filling in short passages in a story, especially where you know roughly what's going to happen and just need the words, or for style modifications: "make this passage terser", "change my dialogue to Elizabethan English".

Is anybody here using this kind of technology as a writing aid yet? Would you consider doing so?
 
Ugh. As with other reported "AI" dialog generators, the obviously canned responses to keywords/phrases in your results just remind me of "ELIZA", the online psychologist on the school's PDP-10... 50 years ago. Obviously that program was much more primitive, but amp up the original concept with a million times more storage capacity for triggers and responses, and it's the same concept with very little "intelligence" involved - just pattern matching.

It's stuff like this that makes me look askance at anything labelled "AI".
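
(For anyone who never ran into ELIZA: the whole trick really does fit in a few lines. Here's a toy sketch in Python - invented rules for illustration, not Weizenbaum's actual DOCTOR script:)

import re, random

# Toy ELIZA: match a keyword pattern, echo part of the input back.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r".*", ["Please go on.", "I see."]),  # catch-all when nothing matches
]

def respond(text):
    for pattern, replies in RULES:
        m = re.match(pattern, text.lower())
        if m:
            return random.choice(replies).format(*m.groups())

print(respond("I feel tired of AI hype"))  # e.g. "Why do you feel tired of ai hype?"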
 
I haven't used anything like this, but that's mainly because I'm at the relatively low end of the "quickness to adopt new technologies" spectrum. But even if I were an early adopter, I don't see how it would be useful for my own writing.

I'm impressed with it on a sentence-by-sentence basis. It's amazing. It's much less impressive in its ability to maintain narrative continuity. AI-generated stories still tend to have an odd, disjointed feeling, and it still seems to be a ways away from overcoming that.
 
Compared to the unintentionally hilarious drivel generated by the earlier versions of these (as touted in the threads from a couple of years ago), the stories generated are much better. There's not as much repetition, and none of the stories, save the last, wandered off into nonsense.

I'm imagining a short-staffed newspaper or news website finding this very helpful in generating short articles.

Though I am curious about what it would do with prompts that don't have any established characters in them.
 
I haven't used anything like this, but that's mainly because I'm at the relatively low end of the "quickness to adopt new technologies" spectrum. But even if I were an early adopter, I don't see how it would be useful for my own writing.

I'm impressed with it on a sentence-by-sentence basis. It's amazing. It's much less impressive in its ability to maintain narrative continuity. AI-generated stories still tend to have an odd, disjointed feeling, and it still seems to be a ways away from overcoming that.

Yeah, it is, though I'm impressed at what it does manage for structure above the sentence level. Those outlines have a good solid beginning-middle-end shape to them. It reminds me a bit of a lot of the visual AI-generated art that's going round at the moment - quite nice composition and stuff that looks good at a quick glance, but the closer you look the wronger it gets. Continuity in terms of facts is rubbish but it has some sense of the shape of a paragraph or a synopsis.

The closest human equivalent I can think of is a classmate of mine who used to get passing grades on book reports for things he hadn't actually read, because he knew what a book report was supposed to look like and he made good use of filler phrases like "as they faced the challenges and dangers of the world, they knew that they had each other to rely on, and that nothing could ever come between them" that can apply to just about anything in the genre.

One weakness I've noticed is in synthesis. I asked it to give me "a romance between two incels" and what it gave me did a reasonably good job of describing incels, and a fairly standard "two lonely people meet one another and gradually fall in love" outline, but it didn't really examine the oddity of one character being a female incel, or how that ideology might complicate their relationship, other than an unintentionally hilarious bit about how one of the things they had in common was "exploring the darkest corner of the internet". In general with prompts where I ask for more than one thing in the story, it does end up feeling rather disjointed.

Sometimes it does surprisingly well though. I asked it for "a romance between two actresses in 1990s Iran" and it did focus on the homophobia angle which I hadn't explicitly flagged, giving me a synopsis about a closeted relationship where they ultimately end up fleeing the country. With that kind of thing it's hard to know whether the bot is in fact doing some kind of meaningful synthesis, or if it's just that it's learned from stories which already covered that kind of intersection.

That's another reason why I'd be wary of using it as a tool - I'd be concerned about the risk of plagiarising from specific stories.
 
Thanks for the link. I've been seeing that site's logo on Twitter without knowing where it was from. Elon Musk even commented on it (after others posted results) and gave it good reviews.

I've been looking at a lot of AI writing sites recently. I find that Writesonic is the best for actual writing, including erotica.
 
I was going through old papers from my office (it's taken me three years to get to this box) and came across an old New Yorker cartoon: a couple is at a desktop computer, and the woman declares, "Well, if there's Artificial Intelligence, there must be Artificial Stupidity." Remember the old computer adage, "garbage in, garbage out"? Perhaps we should consider the possibility that Artificial Intelligence is limited by the real intelligence of the people developing it.
 
It's free to use though you'll need to register and verify a phone number.
I don't remember being asked for a phone number on registration, but then again I'm already registered with my university email and am under my university's Organization, so it's not like it's private for me anyway :cathappy: Or maybe I'm just so used to phone number verification that I forgot.

I personally wouldn't use GPT-3 or models like it for writing, but that's because I want to convey my voice in particular when I write, and so large language models don't do it for me. But there are specific applications where it could be useful.

Also, ChatGPT is trained as a chatbot, not as a writer. Its goal is to try to have more natural dialogue (but not necessarily emotive dialogue) while accessing a massive corpus of information. A better Siri, basically. If you wanted to make an AI writer, you'd make an AI writer, not this.
 
Yeah, it is, though I'm impressed at what it does manage for structure above the sentence level. Those outlines have a good solid beginning-middle-end shape to them. It reminds me a bit of a lot of the visual AI-generated art that's going round at the moment - quite nice composition and stuff that looks good at a quick glance, but the closer you look the wronger it gets. Continuity in terms of facts is rubbish but it has some sense of the shape of a paragraph or a synopsis.
I agree with you. I'd restate what I said this way. From one sentence to the next, there's impressive continuity. It's when you look at sentence 1 and then sentence 6 that it feels a bit like a game of telephone, where a little bit of continuity is lost with each step until the narrative doesn't quite make sense when you've gotten far enough along. Keeping that sense of continuity requires pausing and thinking back on what you started with. I don't presume to understand the AI well enough to know how easily that can eventually be accomplished.
Regardless, I don't see it as a useful tool for a writer, or for me, anyway.
 
I don't remember being asked for a phone number on registration, but then again I'm already registered with my university email and am under my university's Organization, so it's not like it's private for me anyway :cathappy: Or maybe I'm just so used to phone number verification that I forgot.

Definitely asked me for one, but maybe that was introduced recently after it took off?

I personally wouldn't use GPT-3 or models like it for writing, but that's because I want to convey my voice in particular when I write, and so large language models don't do it for me. But there are specific applications where it could be useful.

Also, ChatGPT is trained as a chatbot, not as a writer. Its goal is to try to have more natural dialogue (but not necessarily emotive dialogue) while accessing a massive corpus of information. A better Siri, basically. If you wanted to make an AI writer, you'd make an AI writer, not this.
That's a fair point, though having seen previous GPT examples that were more story-focussed, ChatGPT seems to perform pretty similarly - it clearly has a lot of stories/articles in its corpus.

I wonder if the technology might be more useful in the reverse direction, summarising existing works. I've seen some interesting applications of GPT for things like "convert this Wiki article to an RDF knowledge graph" format, and there are times when I'd have been glad of an oracle that would answer questions like "has Jane ever interacted with Bob?" to help me with my continuity. (Less so since I started getting more organised with tagging scenes.)
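
(As a toy illustration of the kind of continuity record I mean - hypothetical story facts, using Python's rdflib; not something GPT itself does:)

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/story/")
g = Graph()

# Hand-entered story "facts" - invented for illustration.
g.add((EX.Jane, EX.interactedWith, EX.Bob))
g.add((EX.Jane, EX.livesIn, EX.Hobart))

# The continuity question: has Jane ever interacted with Bob?
print((EX.Jane, EX.interactedWith, EX.Bob) in g)  # True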

But as it stands I'm not sure I'd trust its answers:

"In Lord of the Rings, how does Gimli feel about Sam Gamgee?"

"In J.R.R. Tolkien's Lord of the Rings trilogy, Gimli the dwarf has a great deal of respect for Sam Gamgee, the hobbit who is one of the members of the Fellowship of the Ring. Gimli initially views hobbits with suspicion, as he and other dwarves have had very little contact with them in the past. However, he comes to admire Sam's loyalty, bravery, and resourcefulness. Gimli also forms a close friendship with Legolas, the elf who is another member of the Fellowship, and the three of them become known as the "Three Hunters" for their skill in tracking and hunting."

I think the mix of truth and falsehood there is more dangerous than if it was just falsehood. It really sounds like somebody who knows what they're talking about - but Sam wasn't one of the "Three Hunters"!

(Granted I'm not quite comparing apples to apples here - this would've been trained on a ton of Tolkien commentary, not just the actual book. But I expect truthfulness is always going to be a weak spot because that's just not what GPT is meant to do.)
 
I agree with you. I'd restate what I said this way. From one sentence to the next, there's impressive continuity. It's when you look at sentence 1 and then sentence 6 that it feels a bit like a game of telephone, where a little bit of continuity is lost with each step until the narrative doesn't quite make sense when you've gotten far enough along. Keeping that sense of continuity requires pausing and thinking back on what you started with. I don't presume to understand the AI well enough to know how easily that can eventually be accomplished.
Regardless, I don't see it as a useful tool for a writer, or for me, anyway.
My very hand-wavy summary of my non-expert understanding of GPT:

Suppose you took somebody with no knowledge of English, and gave them a dump of millions of written documents in English, and asked them to look for patterns, with the goal of writing convincing faux-English.

At a very basic level, they might notice that certain words are very common e.g. "the". So they'll write things like "fish the I the the me green the the the."

Going a little further, they might notice that even though "the" is common, you almost never see it at the end of a sentence, or twice in a row. Now they're writing stuff like "The green the but waving."

Progressing from there, they start to notice that words can be classified into groups which tend to fit together in similar structures: [she/it/Tom/the dog] [was/is/will be/has been] [optional: very/not very/unusually] [happy/sick/dead/tired]. At this point they're learning the rules of grammar, identifying classes of words that correspond to things like subject, object, verbs, etc. etc. and noticing the ways in which they are allowed to combine - although they do not have any sense for the meaning of those words or the combinations, just a "these things often fit together in this kind of way" observation.

So, even though they might not have seen a specific combination like "she is not very tired" before, they have seen those individual words in lots of sentences and are aware that they have the right pattern to work together as a sentence. They might also notice things like the "arm-hand-finger ~ leg-foot-toe" relationship that I mentioned previously, or that things like "missing a leg" are often accompanied by certain kinds of statements which an ungenerous human might categorise as "platitudes about triumphing over disability".

Abstracting beyond that, they can start to recognise that certain structures often appear at the beginning of a document and certain others often appear at the end, helping them reproduce narrative structures (or many others - GPT does a pretty good job of producing plausible-looking recipes, for instance!) So, having learned different categories of sentence, it learns that some of them are appropriate for beginning-y bits, some for middles, and some for ends.

GPT is a framework that allows for multiple levels of this kind of pattern-recognition, each building on the levels below it. Those multiple levels give it an ability to echo the structure of documents it's been trained on, and user input can help it figure out which specific structures it should be emulating today.
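
(To make the lowest level of that concrete, here's a toy bigram model in Python - nothing remotely like GPT's actual architecture, but the same "learn what tends to follow what, then generate" spirit:)

import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn which words tend to follow which (bigram counts).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate by repeatedly sampling a plausible next word.
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the rug . the dog"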

Given time and a lot of training examples, I expect it could get pretty good at things like mimicking a "Hero's Journey" structure. It might also have some application in evaluating existing texts - e.g. I wouldn't be surprised if it could pick up on poor pacing/structure, though articulating that in a way the author can understand and act on might be tough.

Hmm, one dystopian scenario: I could see this being turned into an automated method for editors to "evaluate" submissions from new authors. GPT rates submitted manuscripts on whether it looks like a successful novel (and tells the author which existing novels it resembles) and the editor rejects anything that scores below some cutoff. I suspect that would be a recipe for blandness as authors spend more effort trying to replicate the Great Averaged Novel Structure rather than finding the structure that's best for the story they want to tell :-/ Not quite the one Roald Dahl suggested, but not much happier.
 
Note: Writing sex isn't allowed. I hope I don't get banned! It gave a little note saying my free account was flagged.

Edit: Never mind. The site left a note saying my post might be flagged, but when I clicked on it, it showed a chart saying that it passed.

You can't use prompts with sex, or anything underage, or anything about self-harm.

The computer will flag it and check it; then, I think, if it doesn't pass the computer check it goes to a human check.
 
I'm impressed with the Scrooge example!

ChatGPT is really interesting. It's simultaneously impressive and limited.

I wouldn't want it to write anything for me, but it could be useful for coming up with ideas. For instance, I fed it this prompt about two characters from one of my stories:

"Diego and Ashraff like to show off their muscles. What can they wear and do so that their live stream is popular?"

It gave me a whole list of things they could do. Most things were obvious, but it's still a decent list. I liked the creativity in this answer:

"Body paint: They can wear body paint that highlights their muscles, such as metallic or glitter paint. This will make them stand out and create a visually striking image for their audience."

Maybe I'll have to work that into a future story.

Anyhow, I have a few questions about it, if anyone wants to speculate:

#1 These AI things are fed material by scraping tons of things from the internet. There's some news about artists and photographers who are upset that the image-generating AIs were fed their work without their permission and are now producing derivatives. Is it possible that Lit was included in AI training data?

I tried testing it by feeding a couple lines from one of my older stories and asking it to continue. It gave some well-written but cliched examples (one made me want to gag), but nothing resembling anything I'd written. I think there are other chatbots that allow for more erotic responses, but I'd hoped that if I fed it a particularly unique sentence from my story that I could see if it had ingested it at some point.

#2 Would you be upset if your story was part of its training data?

I'm writing for fun, so it doesn't bother me. But I can definitely understand why others wouldn't want their work used to train a machine that's going to generate a lot of money for someone else.

#3 To what extent is the AI synthesizing everything versus just regurgitating one or a few sources?

I think what it says is based on probabilities, so in that sense it must be considering everything. But I've read that some of its responses are very close to things written elsewhere.

Also, Bramblethorn, your classmate's strategy for book reports is hilarious.
 
Anyhow, I have a few questions about it, if anyone wants to speculate:

There's another discussion about GPT currently here, which overlaps a bit with your questions (or did before it got derailed into relitigating some non-AI-related drama): https://forum.literotica.com/threads/chatgpt-your-way-past-writers-block.1581500/

But briefly:

#1 These AI things are fed material by scraping tons of things from the internet. There's some news about artists and photographers who are upset that the image-generating AIs were fed their work without their permission and are now producing derivatives. Is it possible that Lit was included in AI training data?

Almost certainly. GPT is trained on a very large corpus of text scraped from the internet; the relevant part of that corpus is Common Crawl. I'm not aware of any reason why Literotica would be excluded from CC. In particular, Literotica's robots.txt file (the place where you tell bots not to index your site etc. etc.) doesn't seem to have the setting that would stop CC from scraping Literotica stories.
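
(For reference, the setting in question would look something like this in robots.txt - CCBot being Common Crawl's crawler:)

User-agent: CCBot
Disallow: /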

I tried testing it by feeding a couple lines from one of my older stories and asking it to continue. It gave some well-written but cliched examples (one made me want to gag), but nothing resembling anything I'd written. I think there are other chatbots that allow for more erotic responses, but I'd hoped that if I fed it a particularly unique sentence from my story that I could see if it had ingested it at some point.

This is not a guarantee that it hasn't been trained on your story.

Summarising and simplifying: GPT picks up patterns in words, and patterns of patterns, and patterns of patterns of patterns etc.

Suppose my story contains a line like "We went to my friend Tony's place in Hobart and ate fish and chips for dinner. It was cold so I wore my gloves to stop my fingers from freezing."

At the lowest level, GPT can learn rules like: "'went' is often followed by 'to'", "'fish and chips' often appears as a unit", "'gloves' and 'fingers' often appear close together", etc. etc. The more it sees of text following these rules, the more strongly it learns them.

From reading other sentences, it can figure out that words like "Hobart" and "Launceston" and "Auckland" often appear at the same place in the same kinds of patterns, so it can guess that if it's seen a particular pattern that features the word "Hobart" it might be okay to use "Launceston" in that same kind of pattern.
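
(A toy version of that "same slot" observation, with an invented mini-corpus: words are judged similar if they turn up in the same contexts.)

from collections import defaultdict

sentences = [
    "we flew to Hobart yesterday",
    "we flew to Launceston yesterday",
    "we flew to Auckland today",
    "we ate fish and chips",
]

# Record each (previous word, next word) context a word appears in.
contexts = defaultdict(set)
for s in sentences:
    w = s.split()
    for prev, word, nxt in zip(w, w[1:], w[2:]):
        contexts[word].add((prev, nxt))

# The city names share the ("to", "yesterday") context; "fish" doesn't.
print(contexts["Hobart"] & contexts["Launceston"])  # {('to', 'yesterday')}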

At higher levels, that lets it learn how sentences tend to be structured, and how certain kinds of sentences get put together into paragraphs, etc. etc.

So mostly it's learning those rules rather than memorising everything in its training corpus. For some things that show up in the corpus many times, it learns those rules so strongly that it effectively has memorised the text - I just got it to give me the first three paragraphs of the Book of Genesis, for instance. But when it reads your stories or mine, it's just very slightly strengthening (or weakening) a whole bunch of rules that it's also learning from other texts. Put together everything on Literotica and it starts to figure out some generic patterns for erotic stories but it's rare that you'd be able to say for sure that it's copying from a specific story.

#2 Would you be upset if your story was part of its training data?

I'm writing for fun, so it doesn't bother me. But I can definitely understand why others wouldn't want their work used to train a machine that's going to generate a lot of money for someone else.

I'm not thrilled, but that's less about it being my story in particular than with the broader issues. These questions of creation, ownership etc. are complex, and intellectual property law wasn't designed with this kind of thing in mind. It was already pretty hard for artists to make a living; using their work to build a system that's going to make it even harder for them, while making money for somebody else, seems like something that probably ought at the very least to have had a bit more discussion - and more room for people to opt out of having their works used - rather than just Disrupting All The Things.

But there are several other ethical concerns with GPT and similar tools, aside from those IP/ownership questions.

Training them is becoming quite energy-intensive - this article discusses energy usage as of a couple of years ago, and while those numbers aren't huge in the global scheme of things, they're increasing rapidly as people develop more variants and increase the size of their models attempting to improve their performance.

Training an AI on the contents of the internet has the problem that a lot of the internet is full of toxic shit that you don't want an AI to replicate. In order to prevent that from happening, you need humans to look at that toxic shit and label it so the AI can learn what not to do. This work is outsourced to workers in Africa:

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.”

(That one's not just an issue with OpenAI; all the major social media companies have moderators whose job is to assess objectionable content, and spending forty hours a week categorising photos/text about atrocities, child abuse etc. is soul-destroying work.)

The other one is how much these systems are being overhyped. GPT is a remarkable piece of technology but it has major limitations. It knows the kinds of shapes words make; it has no way to gauge which of those shapes are true and which ones are false. It's a smooth-talking bullshit artist, and while there are many commercial applications for that, very few are ethical. Using it in any area where accuracy matters is a recipe for disaster.

#3 To what extent is the AI synthesizing everything versus just regurgitating one or a few sources?

In most cases I think it's more synthesis than regurgitation, but this depends a bit on the prompt. As above, when one text appears in the corpus many times it may regurgitate that, and if you give it a prompt in an area where it doesn't have a lot of relevant sources to draw upon, what it tells you is going to draw heavily on what few sources it does have - along with plenty of made-up stuff to fill any gaps.
 
I wouldn't want it to write anything for me, but it could be useful for coming up with ideas. For instance, I fed it this prompt about two characters from one of my stories:
I was playing around with it last night, and I agree. Could be useful for story prompts or to give a little bit of inspiration.

It's also not too bad at generating pop song lyrics (and it makes a note to say that they are not based on any existing lyrics). Could be useful if you want to be able to have custom lyrics in your story.

I also told it to format a story in iambic pentameter. That was interesting.
 
I thought this was rather impressive. I asked the program to "tell a poem in iambic pentameter in which Winnie the Pooh defeats the Woozle."


Winnie the Pooh, with his belly so round,
Embarked on a journey to hunt on the ground.
He sought out the woozle, that creature so mean,
Whose ways were so wicked, so wicked and keen.

But Pooh, with his wit, would not be deterred,
For he knew that his mission must soon be concurred.
So, he set out with purpose and never looked back,
Determined to capture the woozle's misdeeded attack.

The woozle, aware of Pooh's noble pursuit,
Tried to escape with a mischievous hoot.
He ran and he slipped, but Pooh followed the trail,
And soon had the woozle, in his grasp without fail.

"Ha!" Pooh exclaimed, with a victory cheer,
"I've caught the old woozle, and banished his fear."
And with that, the chase was so bravely won,
And Winnie the Pooh, was the hero of the fun.

So, the folks of the forest rejoiced with delight,
And celebrated the victory of Winnie that night.
For he showed that the smallest can do what is right,
And overcome any challenge with wisdom and might.

It doesn't get the meter quite right in all places, and the word choice is sometimes odd, but it's pretty good. It turned this out in under 15 seconds. If you read it out loud and fudge the emphasis in places you can mostly make it work, which I thought was interesting.


 
I thought this was rather impressive. I asked the program to "tell a poem in iambic pentameter in which Winnie the Pooh defeats the Woozle."


Winnie the Pooh, with his belly so round,
Embarked on a journey to hunt on the ground.
He sought out the woozle, that creature so mean,
Whose ways were so wicked, so wicked and keen.

But Pooh, with his wit, would not be deterred,
For he knew that his mission must soon be concurred.
So, he set out with purpose and never looked back,
Determined to capture the woozle's misdeeded attack.

The woozle, aware of Pooh's noble pursuit,
Tried to escape with a mischievous hoot.
He ran and he slipped, but Pooh followed the trail,
And soon had the woozle, in his grasp without fail.

"Ha!" Pooh exclaimed, with a victory cheer,
"I've caught the old woozle, and banished his fear."
And with that, the chase was so bravely won,
And Winnie the Pooh, was the hero of the fun.

So, the folks of the forest rejoiced with delight,
And celebrated the victory of Winnie that night.
For he showed that the smallest can do what is right,
And overcome any challenge with wisdom and might.

It doesn't get the meter quite right in all places, and the word choice is sometimes odd, but it's pretty good. It turned this out in under 15 seconds. If you read it out loud and fudge the emphasis in places you can mostly make it work, which I thought was interesting.



Some malapropisms can perhaps be explained by its being trained on text written by fallible humans. I wouldn't be surprised if there were several examples out there where people had written "concur" for "conclude". "Concur" often gets used near the end of a document - e.g. "I concur" as the end of a statement - where you'd also find things like "I conclude", which might have contributed to GPT treating these as synonyms.

I had one where I asked it to tell me a story about a scholar and it insisted on describing him as "pouring [sic] over old tomes" which presumably means that there were enough "poring"/"pouring" mistakes in its corpus to make an impression.

For a human, writing pentameter requires knowing the rhythmic structure of each word e.g. "deterred" has two syllables with the emphasis on the second. If we don't know, we could look the word up in a dictionary and get that information, but as far as I know GPT isn't built to do that kind of thing - it probably has to figure out the necessary information from context. For a word that doesn't often appear in poetry, that could be a challenge. It may be able to make some guesses based on word length and letter combinations; I know it uses basic stemming but beyond that, I'm not sure how much it works at the sub-word level.
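
(A conventional program can get that stress information from a pronouncing dictionary - e.g. a quick sketch with Python's pronouncing library, a wrapper around the CMU Pronouncing Dictionary:)

import pronouncing

# "1"/"2" mark stressed syllables, "0" unstressed.
def stress(word):
    phones = pronouncing.phones_for_word(word)
    return pronouncing.stresses(phones[0]) if phones else "?"

for w in ["deterred", "victory", "woozle"]:
    print(w, stress(w))
# "deterred" comes out as "01": two syllables, stress on the second.
# ("woozle" presumably isn't in the dictionary, hence the "?" fallback.)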

I forget whether it was here or on some other forum, but somebody observed that GPT is exactly the opposite of what SF taught us to expect from an artificial intelligence: a smooth talker that's pretty good at mimicking conversational style, terrible at logic!

As an example of the latter:

can GPT reliably multiply 3-digit numbers?

Yes, GPT can reliably multiply 3-digit numbers, as well as larger numbers. It has been trained on a wide range of mathematical calculations, including multiplication, and can perform these calculations accurately.

what is 123 times 567?

70041

what is 81 times 74?

6004

Both of these are wrong. (Easy test: a number is divisible by 9 if and only if the sum of its digits is also divisible by 9. For instance, 8+1=9, so 81 is a multiple of 9; but 6+0+0+4=10, so 6004 is not a multiple of 9, and therefore can't be a multiple of 81.)
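
(Or just let an actual calculator settle it:)

for a, b, claimed in [(123, 567, 70041), (81, 74, 6004)]:
    digit_sum = sum(int(d) for d in str(claimed))
    print(f"{a} x {b} = {a * b}; GPT said {claimed}; "
          f"claimed digit sum {digit_sum}, divisible by 9: {digit_sum % 9 == 0}")
# 123 x 567 = 69741, not 70041; 81 x 74 = 5994, not 6004.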

The fact that it gets these wrong is less of a problem than the fact that it claims to be reliable...
 
I forget whether it was here or on some other forum, but somebody observed that GPT is exactly the opposite of what SF taught us to expect from an artificial intelligence: a smooth talker that's pretty good at mimicking conversational style, terrible at logic!

This is the Star Trek model, the one Kirk was always able to make billow smoke by subjecting it to a few simple logical conundrums.
 
The "Star Trek model" (of course it's a lot older than that), is still true of robots, which actually have to do something useful. One can make them fail catastrophically when testing them.

A deep neural network simulates brain architecture, and reasoning is only a very tiny part of a brain's function, even in humans.

Learning in humans uses a similar mechanism to machine learning, by generate-and-test. It's not a surprise to me that deep learning models, particularly GAN, diffusion and transformer networks (which I've been building for the last few years in my speech processing AI company), are good at gaining what Hilary Putnam called "implicit knowledge" -- knowing "how", as opposed to knowing "that".

Ever since the four-color map theorem was proved by brute force using a computer in the 1970s, scientists and mathematicians have been uncomfortable with the idea of machines doing explicit problem solving.

But recently, neural networks have been making great strides in learning maths and logic -- a totally different approach from the attempts to "hard-code" logic into them. Like our brains, at the very base, neural networks are connected logic gates (with a bit of fuzziness added). But these logic gates have nothing to do with logical reasoning, which appears much higher up in the dizzying hierarchy of interconnected networks that comprise the most complex thing in the universe: us.
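
(A minimal illustration of that "fuzzy logic gate" idea: one artificial neuron with hand-picked weights acting as a soft AND gate. Real networks learn weights like these rather than having them set by hand.)

import math

def neuron(x1, x2, w1=10.0, w2=10.0, bias=-15.0):
    # Weighted sum squashed through a sigmoid: a "fuzzy" AND gate.
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(neuron(a, b), 2))
# Output is near 1 only when both inputs are 1 - a soft AND.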
 
The "Star Trek model" (of course it's a lot older than that), is still true of robots, which actually have to do something useful. One can make them fail catastrophically when testing them.

A deep neural network simulates brain architecture, and reasoning is only a very tiny part of a brain's function, even in humans.

"Is inspired by" might be a more accurate statement here. There are similarities at the basic level - they're both built from a large number of small, relatively simple units which activate one another to produce complex behaviour - but beyond that there are many, many differences, and accurate simulation of a human brain generally isn't the objective.
 
That's true. In particular, there are no "global conditioning" neurotransmitter equivalents -- no endocrine system, which plays such an important part in animal behavior. They were, as you say, originally inspired by discoveries in neuroscience. I actually worked on what we called a "retina simulator" back in 1980. It was basically a single 2D convolution operator that ran on an image of 30 x 30 pixels. It required kicking everyone off the mainframe to run.

Far and away the most significant difference between the way the brain is wired and the way (most, not all) neural networks are wired is neuroplasticity, a relatively new discovery: the ability, continuing throughout life, to grow, prune and redeploy connections in animal brains. Energy conservation is not an issue that AI engineers worry about as much as living things do -- yet.
 
Since there are several people discussing, we could try playing scientist here.

Sidenote: I'm a skeptic, btw. I believe "ethical AI research (machine learning specifically, I suppose) is ineffective, and effective research is unethical." We here are dwelling on writing, but the unethically appropriated data used to train AI might as well be your health record, your browsing history, your bill-paying history, your shopping history, what ads you've seen, what you've said out loud in the presence of your phone or Alexa device, your GPS location history, and all combinations of the above.

Nonetheless, what if a bunch of y’all who use this put in the same prompt and posted your results here?

We can repeat with different prompts, but let's try this: everybody (aka different people) who has decided to mess with this, put in the same prompt - "the Smurfs meet the seven dwarves" - and post the output back here.

I can see that the tools are programmed not to return identical results to different people, but what about shared phrases, sentences, or maybe paragraphs? It'd be an interesting crowd-sourcing test.
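
(If people do post results, something like this would do the phrase-level comparison - hypothetical snippets standing in for two posters' actual outputs:)

def ngrams(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

# Replace these with two people's real outputs for the same prompt.
out_a = "the smurfs met the seven dwarves deep in the enchanted forest"
out_b = "one day the smurfs met the seven dwarves near the old mine"

print(ngrams(out_a) & ngrams(out_b))  # five-word phrases both outputs share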
 