ChatGPT

MikeXIrons · Really Really Experienced · Joined: Jan 7, 2020 · Posts: 390
I have lots of ideas that intrigue and arouse me... kernels of stories. I wind up never writing fully formed stories because I can never decide EXACTLY which way to go with the story.

In the past I've wished I had the time (and the tech platform to help me) to make every story into a choose-your-own-adventure story with lots of possible outcomes tailored for the particular kinks of the reader. But... the platform doesn't exist and I don't have the time to write each story 27 times.

BUT! I wonder if the new AI language models would let me write one version of the story, and then "transpose" it into multiple variations.

For instance, I've been thinking about a scenario where a couple finds themselves saved from a life-or-death situation by a mysterious rugged man. The story could have lots of themes... the cuck scenario, the loving wife scenario, the rapey scenario, the bi scenario, etc etc.

Previously I imagined being able to satisfy all audiences by allowing them to "choose their own adventure". But, that was too much work, so I satisfied no audiences.

BUT, what if I could write one version, then ask an AI to "rewrite this story with less loving-wife, and more violent rape". Or, "less violence, but more incest". Or, "less incest, but, more mind control... or more alien tentacles."
 
I've wondered what AI's impact on Literotica would be.
Failing some major changes, it's going to take over. 10-20 years out, I'm honestly not sure that can be stopped, no matter how much the site's owners want to try to stop it.

ChatGPT and clones can already generate text that's "cleaner" than a lot of stories that get published here. There's no way to police its contributions to early drafts. It's already being used as a pretty decent nonhuman editor.

The more stuff any given human author submits to the site - or just to the internet, generally - the bigger a pool of data they're giving to ChatGPT et al to use to generate stuff in the same style. Maybe that's not whole stories that hold together right now, but it'll be enough for some plagiarist to flood the site with good-enough knockoffs. The unaided human author won't be able to compete on volume.

I imagine some writing sites will create subforums for "mostly human only" writing, and those subforums will end up having to demand a certain level of author/reader engagement to bolster the claim that a real human is doing real work. It still won't be possible to police that space very effectively. Any sufficiently motivated bad human actor will be able to use ChatGPT-esque AI to catfish people there.

The hope will be that a site that gives 95% of its space to AI-generated fiction will appeal to the better angels of human nature when it humbly asks for the 5% space to be respected. Most human creativity in the lower-budget forms, like writing, is going to be quaintly artisanal pretty soon. The value will have to be the thing itself: that a real human was heavily involved. That brings us back to those dubious "relationships" being cultivated. Call it a hunch, but I think your average OnlyFans performer will have better luck cultivating a parasocial simp crowd than your average human fiction hobbyist will, even when an attempt to do so becomes a literal requirement for the latter to prove they're human and doing most of the work.
 

You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi

Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).
If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.
But let's back up a minute, because we're not quite there yet. (At least not today—as in literally today, March 13, 2023.) But what will arrive next week, no one knows.

BUT, what if I could write one version, then ask an AI to "rewrite this story with less loving-wife, and more violent rape". Or, "less violence, but more incest". Or, "less incest, but, more mind control... or more alien tentacles."
Indeed, I think the answer is "we're not there yet", but we may be eventually, or not, at some point in the future.

When we are there, though, why even pre-generate the text at all? Your work of art would be the scenario, the mandatory scenes, the relationships and the developments, not so much the word-by-word text, which could be unique to an extent each time, for each user and each read-through, based on user tweaks to a custom AI model attuned to output your story, or variations on it based on user choices. Sure, that would require a whole new interface, and it probably wouldn't be called a plain old story, but perhaps some new name not invented yet. That would be fair, as such an "aiteller" would be a different medium altogether.

What I personally would want out of AI-assisted writing is, basically, a shadow writer. I set up the outline, the main details and the scene setups, and the AI fleshes it out, filling in all the boring stuff: details of minor characters' backgrounds, routine filler scenes that are there just to showcase how the world functions before it's broken by the events, and then afterwards the weird new "normal" it settles into.

On the side, I would like to discuss lazily done research and weird thoughts with it, knowing exactly that I'm chatting with a machine, with the benefit that it can't get offended and won't turn all defensive and knotted up over some random minor squabble. But yeah, that also requires some very serious privacy guarantees.

I think the real impact would be much higher expectations of quality overall, especially for the literary qualities that treat the text itself as a game in search of novelty, precisely because mediocre plot-based entertainment would be easily flooded by the aforementioned aitellers, a new form of meta-art in itself.
 
One of the other sites where I post stories recently announced that they've acted to exclude their site from future versions of Common Crawl, which is the main text corpus used to train these technologies. I'd be interested to know whether @Laurel and @Manu have considered doing the same with Literotica.
 
If you look at the site's robots.txt, they disallow crawling by everyone.

https://literotica.com/robots.txt
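
For anyone who wants to check that for themselves rather than squinting at the raw file, Python's standard library can fetch and parse a robots.txt and report whether a particular crawler is allowed in. A rough sketch along those lines (the story URL below is just a placeholder, and "CCBot" is the user-agent string Common Crawl's crawler identifies itself with):

from urllib import robotparser

# Fetch and parse the site's robots.txt
rp = robotparser.RobotFileParser()
rp.set_url("https://literotica.com/robots.txt")
rp.read()

# Placeholder story URL, just to have a path to test against
test_url = "https://literotica.com/s/some-story"

# "*" covers generic crawlers; "CCBot" is Common Crawl's crawler
for agent in ("*", "CCBot"):
    allowed = rp.can_fetch(agent, test_url)
    print(agent, "allowed" if allowed else "disallowed")

Worth remembering that robots.txt is only a request: it keeps out the crawlers that choose to honour it (Common Crawl says CCBot does), but it doesn't physically block anything.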
 
Where to start?

ChatGPT is a tool like Grammarly, but a more advanced one; however, for me it exposes the weakness of people rather than the strength of the programme.

The whole point of writing a story is using your skill to move people, to reach out, to provoke a human reaction, to craft language in order to put forward ideas that create an emotional response in the reader. If your thinking is "I'll use ChatGPT for the boring parts", the question should be 'why does your story have boring parts?'

Writing is like any art form: it is its own reward. Using ChatGPT to write a story seems like using a bot in a computer game. Sure, you might win all the time, but where's the fun in that?
 
You obviously have no idea what you're talking about, and you haven't tried it in-depth. No AI currently has the ability to write a worthwhile story, not above the level of a nine-year-old, anyway. But it's an amazing tool that improves writing and saves a lot of time.

You are probably one of those who are afraid of change, who were late to purchase a mobile phone, then a smartphone, and are now afraid to use this amazing tool. This is the future. You either go for it or you die.

By the way, I really like Google's AI.
Not really.

But I’ve seen what it does and the problem is, it’s generic. It looks at other stuff and then regurgitates it in its own way.

It's like a mirror, but the problem is, it can't think. When you're engaged in writing a story and getting words down, there's something about that process that causes inspiration to occur. Steven Pressfield talks about it as the muse, an angel on your shoulder, and it's a powerful feeling. If you're using a tool like this, it's like switching off inspiration for a bit.

As for tools, I mentioned Grammarly at the start, and everyone (myself included) uses a spell-checker; couldn't function without it (doesn't catch all of them though, does it?). But the way to write better is by getting your inspiration on the page, rather than using a tool for "the boring bits" and then only writing the sexy parts.
 
It appears that you find pleasure in speaking to yourself and regurgitating.

I've lost enough IQ points already, and I really need to relieve some stress by smashing some fresh empty skulls, so back to Tilan now.
Wow, such bad manners here.

That's what it does. It's a program. It can't be inspired, or feel anything over a clever use of language, or think "wow! Where did that come from?", or even come up with a new argument on an old topic. It will look at what there is and try to get something down that feels like it suits the author's needs, but I question whether, in surrendering chunks to GPT, you miss out on being inspired.
 
A great example of what I’m talking about came when I was writing “All the Devils are here”.

There were moments when I felt like I wanted to get to "the good stuff" and viewed where I was in the story as unimportant. If I'd taken the easy route of using filler, entire scenes would have been missed, because it was only when I got there that I realised "this is the good stuff".

In art there are plenty of tools that fill in the gaps for artists and make the entire process much faster these days, so I know how tools can improve things; I question if you're losing more than you're gaining.

Give examples to show your workings out.
 

Firstly, it's coming and no power on Earth can stop it, this much is true.

But you write from an astoundingly narrow perspective: March 2023 to be precise, and from the perspective of AI 1.0. And yes, currently the AI doesn't have "the ability to write a worthwhile story, not above the level of a nine-year-old, anyway." But AI 2.0 will write to the level of a 12-year-old, and 3.0 will get us up to a 15-year-old - you see where this is going? Where it's going is a point in the not-too-distant future where the idea that a human actually wrote (or drew, or composed) something will be viewed as rather quaint. This might be in 10 years' time, or 20, but it's coming.

Currently publishers are adding little declarations to sign when you submit that the submission hasn't been created by an AI, that it is your work. But that's just an honour system, and honour systems are worthless once you get beyond the point where everyone knows each other personally. So there will be a point in the near future where a) publishers will have to accept that what they are receiving is machine-created, then b) they will decide there ain't no point paying a person when they could just input the task into the AI themselves, followed by c) readers rightly questioning why they are paying someone else when they could just get an AI to write them the story they want to read, and lastly, d) bearing all this in mind, why would you even believe that the name on the book, presented as the author, did anything more than type in an instruction and have a cursory glance at the result? And if you did create something, poured your energy into it and worked at it? Some smug c*nt will simply tell you that a machine could have done it better anyway. So why bother? So, bye bye Lit, the publishing business, human input into the creative process beyond some key words. Whoop-di-doo, won't that be fun?! Look what we'll have gained!

Of course, it goes further. I teach at a university and I'm already looking at a room of 20 year olds who seem to regard being asked to actually write something in person (I'm a bastard, and my response to GPT essays is to hand out the pens and paper) as the most diabolical impertinence. Why should we have to bother writing an essay when a machine can do it for us? Because, bozos, an essay isn't something you need to do as some form of punishment. Its purpose is to 1) demonstrate you've actually been f*cking listening rather than chatting on Whatsapp, 2) show that you've actually consulted and analysed the relevant literature, and 3) can marshal your thoughts and present them in a logical sequence. In short, it shows you've actually f*cking learnt something. But that's just where we are now. Because if you roll this forward, then we could actually ask a legitimate question: if the machine can do it for you, what is the point in learning anything? And this is the place it is going to.

AI has the potential to be the most revolutionary technology since the advent of agriculture. Our societies are built on a fundamental basis: we exchange our labour (physical, mental, or both) for money, and use that money to buy the products of others' labour. AI will remove that exchange of labour for money in virtually every area of human endeavour, and it will do it pretty quickly. Once this happens, the utopians will say that we will be free to do what we want, that a system of Universal Basic Income will provide us with our needs. Hmmm. Personally I am far from sure that UBI will ever be rolled out, or that it will be a good thing if it does happen. Frankly, we need to have a reason to get out of bed in the morning, and keeping the wolf from the door has been that reason for the entirety of human history (and pre-history). Take that away and what are we left with?

But whether I like it or not, it's coming. I just hope it doesn't bite you in the arse.
 

And this is precisely the problem.

AI was meant to improve lives, to do the mundane tasks that we couldn't: factory work, fruit picking, cleaning, etc. But it isn't. Those with the ability to program are using it to actually take the skilled jobs, the artistry, the humanity out of humanity.

It's like the deepfake videos we see now, which aren't perfect but are getting there. What is happening is that the skills we do have are being bypassed, leaving humans with the menial jobs forever while creativity is sidelined. What are we losing? Ourselves, I would argue. What are we gaining? Very little; in fact, with a combination of AI writing, deepfake monstrosities and a concentration of media power in the hands of the wealthy, it could be argued that we should be very worried about the future.

You mention UBI, but it's actually the skilled, worthwhile pursuits that are being taken, whereas those with money and power have shown no interest in replacing humans' menial tasks. Humanity should be looking at ways to improve itself; instead, the arts are being given to robots while we sweep the floors of an AI gallery.

These are the fundamental questions for humanity and I have little hope that those with the power to create a Star Trek future have the ability to do so.
 
Because if you roll this forward, then we could actually ask a legitimate question: if the machine can do it for you, what is the point in learning anything?
When the machine breaks, someone needs to be able to fix it. There will always be a need for the proverbial human with a big hammer to thump the fucking wotzit with. Machines - even AI - are very good at mimicking logic and even creativity, to an extent. ML and assorted things often discover emergent phenomena that humans never thought of attempting.

At the same time, though, AIs will never be completely unconstrained or able to recreate the serendipitous discovery events that pepper human history. Ultimately, they can only improve what we are already able to do. No AI is going to reconcile the fundamental physical forces, or discover teleportation... etc.
 
No teleportation?! Well that sucks.

Funnily enough, I was thinking of examples in the media, and the best example of how tech can become boring or bland is, of course, CGI.

CGI, of course, is an amazing tool to help film-makers and animators, but it has become so ubiquitous that there's a bit of a backlash to it, and story is becoming as important as spectacle, if not more so. I can't remember the last time I got excited over a Pixar or DreamWorks animated film or an MCU film, but both my favourite films from 2022 were original, and whilst they both had CGI, it served the story rather than the story being enslaved by it.

This is a really good discussion, but if I’m arguing with a robot then so help me….😡😡🤬😂
 
This is all true. But how many humans will be needed to fix it? Not many. Or at least, not many in comparison to the numbers of humans required to actually do things now. I spent the morning talking to someone who owns a marketing agency, and his conclusion is that his business is screwed. Because soon, why would any company come to him to create a marketing campaign when the AI can do it for them? What he sees is his workforce shrinking from 100 to about 20 in the next few years, followed by the residual work drying up and his business employing nobody at all once companies realise that, by and large, all they need is to whack some instructions into the AI and choose from the options it spits out. And he's a fan of AI. Now, we can all have our view on the marketing industry, but it's not the only one. We're already at the point where finance departments can be largely replaced by machines (as if they hadn't been cut to the bone already), and these are just two areas. Extrapolate this across an entire economy and the result is one or two people doing the work of ten or twenty. Now, in the past, technological advances have removed swathes of employment. But they have replaced that employment with other employment - think farriers and blacksmiths replaced by car mechanics, or coach drivers replaced by train drivers, etc. But AI is in a different league.

And sure, human invention won't entirely disappear, and there will be some need for residual human interactions in, e.g., the caring professions. But even there... well, we've all heard of robots replacing humans in retirement homes in Japan. However, for the bulk of humanity, AI spells redundancy, because why would you bother to pay someone when you could use a machine? And once the machine is 90% as good as the human, then it will be good enough for most things.
 
To be fair, I don't think it would take a particularly smart machine to replace me :ROFLMAO:
Except it would. A machine can’t get inspired or think “I’m going to try and do that differently”. It can only go to the limits of its programme.

It's like watching someone mimic Groucho Marx. They'll do the walk and the voice and the delivery, but they'd never do what he DID in A NIGHT AT THE OPERA when he turned round and said, in this ridiculous voice, "Why, Mrs Claypool, HELLO!" It was ridiculous, out of character and out of delivery, but only Groucho could do that, because he WAS Groucho.

You are Onehitwanda. A computer can be programmed to be like you, but it can’t be you.

Although this is sounding more and more like STEPFORD WIVES Territory.
 
I drive a shuttle bus for a living. I can see a time before I retire when the bus drives itself, and my only role is to help my residents on and off.
 
Judging by self-drive so far, I think you’re safe for a bit yet.
 
If the machine can do it for you, what is the point in learning anything? And this is the place it is going to.
What's the difference between this, and "if someone else can write or do math, why should I learn how to?" Someone else has always been able to do it, but that hasn't been seen as a reason for students not to learn it.
 

The clue is in your own comment: someone else has always been able to do that, and that someone had to learn how to do it. But AI uses machine learning and will learn to do it itself, thus circumventing the need for that someone. In some areas humans may be preferred for longer than in others - the caring professions, medicine, etc. But in many, many areas, both in the sciences and the humanities, this attitude is already evident. Are there students getting to this point faster than the AI? Yes. But that's the direction they're heading in.
 