What will AI still struggle to do for writing in the future?

Joined: Feb 13, 2018 | Posts: 8
Hey, I have a confession to make. It's embarrassing, more embarrassing than any kinky shit in the stories. Long story short, I sent an AI-assisted story for approval. (I hadn't known of any policy against AI-generated content in advance.) I don't plan to do this again; I'm now aware of the policy and had a good laugh about the whole thing. The story was shite, too, so I don't even feel bad about that part!

I had toyed with the thought of prefacing the story with a disclaimer about how it's AI generated, but then a thought occurred to me: Would anyone be able to tell?

I got a good laugh when the story was VERY QUICKLY kicked out of the queue, specifically citing the site's policy against AI works. Well, there's my answer. Heh.

I have a surprisingly pessimistic view of AI. Not in terms of it taking over the world; I think that's far out there. But in terms of basic operation. I mean, my story was quite bad. I actively fought the generator during the writing process because it kept getting stuck in incredibly generic loops, repeating language about the characters' hearts racing, and racing, and throbbing, on and on. I could go into detail about how AI's ROC curves are upside down, or how the only thing that has changed since the '80s is the height of the hype AI receives. It's not something I would trust to drive a car, even though human drivers will always make mistakes, drive drunk, and so on. But I suspect that AI will get better and better in time.

Eventually, the algorithms can probably be trained to keep repetitive descriptions in stories to a minimum. But I'm curious about the moderation process: does Literotica apply any automated AI *detectors*? Or just use good old-fashioned common sense?
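Out of curiosity, the kind of repetition the generator kept falling into is easy to measure with a crude heuristic. Here's a toy sketch of my own (purely illustrative; it is not anything Literotica's moderation actually runs, since that process isn't public) that scores a passage by how many of its three-word phrases appear more than once:

```python
from collections import Counter

def ngram_repetition(text, n=3):
    """Fraction of n-word phrases in `text` that occur more than once.

    A toy heuristic for "stuck in a loop" prose, not a real AI detector.
    """
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

looping = "her heart was racing and racing and her heart was racing and racing"
varied = "her pulse quickened as the door creaked open onto the empty hall"
print(ngram_repetition(looping))  # high: most trigrams repeat
print(ngram_repetition(varied))   # 0.0: every trigram is unique
```

Real detectors are far more involved, but even a score this crude separates a looping passage from a varied one.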

What do you think about dedicating a story category for AI submissions? I believe there may be art worth finding there some day, as long as we clearly delineate what is human-created vs AI-created. Call it a new humor category.

When will AI come up with a musical album good enough that people actually want to buy it?
 
What do you think about dedicating a story category for AI submissions? I believe there may be art worth finding there some day, as long as we clearly delineate what is human-created vs AI-created. Call it a new humor category.
You might want to look at the multitude of threads about AI these last few months.

We've had a bunch of people very upset, some to the point of removing their stories and leaving the site, because the moderation process has been rejecting their stories, suspecting them of being generated by AI.

We're not quite ready for levity just yet - the site is trying to cope with a significant problem.
 
Artificial Inanity at its best. There is going to be a booming proofreading / logic checking business as humanity rolls out these idiot systems. I, for one, am stocking up on popcorn.
 
As an editor, I'm already putting in an order for a new yacht.
And make a YouTube video about how to spot AI. AI videos are a monetization magnet; I've never had my viewing so frequently interrupted.
 
Artificial Inanity at its best. There is going to be a booming proofreading / logic checking business as humanity rolls out these idiot systems. I, for one, am stocking up on popcorn.
One of these days you will get arrested for torturing silicon, you carbon based elementalist!
 
Artificial Inanity at its best. There is going to be a booming proofreading / logic checking business as humanity rolls out these idiot systems. I, for one, am stocking up on popcorn.
I've had many conversations like this with ChatGPT. If you don't know the subject, it will try to snow you with something that sounds plausible but is incorrect.
 
I've had many conversations like this with ChatGPT. If you don't know the subject, it will try to snow you with something that sounds plausible but is incorrect.

It's a toddler trying to learn. It keeps trying thing after thing until it frustrates you into giving it the correct answer. Then it has it. Or it thinks it has it. How much are AI/learning systems going to be warped by malicious input? To me, that's a little scary.

To answer the question the OP posed: humor. Most humor is about the human condition, paradoxes, or inconsistencies in language (word play, for instance). I don't see encyclopedic knowledge underpinning that. What would it do with "Who's on First?" unless somebody specifically programmed it to detect the Abbott and Costello banter schtick?
 
It's a toddler trying to learn. It keeps trying thing after thing until it frustrates you into giving it the correct answer. Then it has it. Or it thinks it has it. How much are AI/learning systems going to be warped by malicious input? To me, that's a little scary.
I recommend that anyone interested in AI read Ted Chiang's story "The Lifecycle of Software Objects."

It goes into the kind of training that it takes to create an AI that mirrors our values. In the story, it takes just as long to train an AI to think like a human as it does to raise a child.

Other AIs are trained in shorter times, and they are predictably more chaotic (think of what would happen if 4chan trained an AI).
 
I know a lot of people who are as bad at riddles as ChatGPT. But at least they fess up to it. If OpenAI dialled down ChatGPT's smugness, it would be a lot better.
 
People seem to not realize that AI isn't some new sort of sentient being. It's just software with the capability of reviewing a huge amount of data in a very short time. It commits what it "learns" from that data to memory and applies that "knowledge" to requests. Like any other software, it makes mistakes, just like an operating system has "back doors" the programmer thought he or she closed, but didn't. The difference between AI and any other software is that it has the capability to "learn" from its mistakes. In that respect it's more like a child. It has a basic set of "knowledge", but it "learns" by making mistakes and then having a human correct those mistakes, just like an adult would correct a child.
 
If you consider that AI is relatively new - I mean, a couple of years at this current level... and already it can be taught to do a LOT of things well... I have no doubt that it's only a matter of a few years or so before it will be very capable of creating the written word, images, code, basically everything a human can do, to such a degree you won't be able to tell the difference.

Will it be writing Shakespeare? I.e., full-on Vincent van Gogh paintings and TV/movie scripts worthy of an Oscar? Well, eventually, yes... because it's going to learn. That's what AI is all about.

I have no doubt my job will be made redundant by AI inside of the next 5 years. I was TOLD that was going to happen about 10 years ago, but the tech at that point was not there. Now, it's so close...

(That's not a creative-type job, but it involves problems just as complex to solve as writing something amazing and not obviously "AI".)

The experts believe that as AI software continues to learn, it'll grow exponentially better... depending on what limitations are set by the humans in control of it.

I use ChatGPT now and then... and I'm seeing changes in its abilities and responses... The very fact that you can ask it the same question twice and get different answers speaks to the fact that it is "thinking". It's not answering 1+1=2... that's 1960s tech.

(And... as for at what point we lose the battle to AI as a species... we won't know when it happens... there are numerous possible outcomes... but I, for one, welcome our new overlords.)
 
If you consider that AI is relatively new - I mean, a couple of years at this current level... and already it can be taught to do a LOT of things well... I have no doubt that it's only a matter of a few years or so before it will be very capable of creating the written word, images, code, basically everything a human can do, to such a degree you won't be able to tell the difference.

Will it be writing Shakespeare? I.e., full-on Vincent van Gogh paintings and TV/movie scripts worthy of an Oscar? Well, eventually, yes... because it's going to learn. That's what AI is all about.

I have no doubt my job will be made redundant by AI inside of the next 5 years. I was TOLD that was going to happen about 10 years ago, but the tech at that point was not there. Now, it's so close...

(That's not a creative-type job, but it involves problems just as complex to solve as writing something amazing and not obviously "AI".)

The experts believe that as AI software continues to learn, it'll grow exponentially better... depending on what limitations are set by the humans in control of it.

I use ChatGPT now and then... and I'm seeing changes in its abilities and responses... The very fact that you can ask it the same question twice and get different answers speaks to the fact that it is "thinking". It's not answering 1+1=2... that's 1960s tech.

(And... as for at what point we lose the battle to AI as a species... we won't know when it happens... there are numerous possible outcomes... but I, for one, welcome our new overlords.)
It's not thinking. That's not how it works. It's, to be very reductive, the most complex autocomplete program on the planet. It uses the same basic techniques your phone does to give suggestions for what you should say next in an email reply, amped up to 11.

None of this stuff is "thinking." None of it is close to AGI (Artificial General Intelligence). It is a flaw of language and people's tendency to anthropomorphize that we use terms like "think" in relation to it. Your phone's autocorrect isn't thinking; neither is ChatGPT.
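To make the "fancy autocomplete" point concrete, here's a toy sketch of the idea: a tiny Markov chain of my own invention (purely illustrative; real LLMs use neural networks at enormous scale, not lookup tables). It picks each next word by sampling from the words seen to follow the current one, which is also why the same prompt can yield different answers without any "thinking" involved:

```python
import random
from collections import defaultdict

# Toy next-word predictor: the same "guess what word comes next" idea
# behind phone autocomplete and, at vastly larger scale, LLMs.
corpus = "her heart was racing her heart was pounding her heart was racing".split()

# Build a table: word -> list of words observed to follow it
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length, rng):
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))  # sampling, not reasoning
    return " ".join(words)

# Two seeds can give different continuations; the variation comes
# from random sampling, not from the program "thinking".
print(generate("her", 4, random.Random(0)))
print(generate("her", 4, random.Random(1)))
```

Depending on the seed, the last word comes out "racing" or "pounding": different answers to the same question, produced by dice rolls rather than cognition.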
 
A.I. Artificial Intelligence. No heart, no soul. No life experience. No sorrow. No pain. No love. No hate. It can only show you what is programmed into it. It cannot feel and it does not care.

We as writers make mistakes. We have everything AI does not. Bad grammar. Spelling. Typos. We may double space from time to time. AI tries to be too perfect, and at times is. I don't fault anyone for the occasional misspelled word or if a line they wrote baffles me because it doesn't make sense. At least I know a human wrote it.
 
I understand the anxiety people feel about AI, but I think people are underestimating its potential.

What are a human's motivations for writing? What makes you think an AI can't emulate those motivations?

It isn't limited to the goal of trying to write a good story; it's also being programmed to try to connect with people: to garner attention, to get people to engage with it. Trial and error, mixed with its intentional propensity to hallucinate, to make mistakes and learn from them, will continue to broaden its reach and appeal while emulating motivations similar to those a human author might have.

ChatGPT is still in its infancy, having just recently passed the one-year mark. I'm not saying it's a good thing or a bad thing, just that it isn't going away, and underestimating what it may become is foolish.
 
It's a toddler trying to learn. It keeps trying thing after thing until it frustrates you into giving it the correct answer. Then it has it. Or it thinks it has it. How much are AI/learning systems going to be warped by malicious input? To me, that's a little scary.
Took an AI class recently to help understand what it is (AI is in fact a massive misnomer, as others have said; it's really just complex programming with the ability to process massive amounts of data). The thing that frightened me most came at the end, when the presenter, a leader in AI development and thought, discussed the social risks of AI. The risk, in his mind, was that AI presented information that was socially unacceptable, and thus the algorithms needed to be modified to remove it. All that says to me is that someone gets to decide what is true and what isn't, and has a means to prevent facts that don't align with their worldview from being known. No matter what side of the political spectrum you sit on, that should terrify you, because your side may control it today and you may like that, but what happens when the other side gets control?

The more I learn about AI, the more I realize the only thing Orwell got wrong was the year.
 
Took an AI class recently to help understand what it is (AI is in fact a massive misnomer, as others have said; it's really just complex programming with the ability to process massive amounts of data). The thing that frightened me most came at the end, when the presenter, a leader in AI development and thought, discussed the social risks of AI. The risk, in his mind, was that AI presented information that was socially unacceptable, and thus the algorithms needed to be modified to remove it. All that says to me is that someone gets to decide what is true and what isn't, and has a means to prevent facts that don't align with their worldview from being known. No matter what side of the political spectrum you sit on, that should terrify you, because your side may control it today and you may like that, but what happens when the other side gets control?

The more I learn about AI, the more I realize the only thing Orwell got wrong was the year.
The problem is that some stuff really IS dangerous. Not political, necessarily, but, say, bomb making. Or how to run a coordinated harassment campaign.
 
The problem is that some stuff really IS dangerous. Not political, necessarily, but, say, bomb making. Or how to run a coordinated harassment campaign.
The bigger threat is blurring the lines between what is actually dangerous and what is merely uncomfortable, thus making reality "dangerous" and subject to ban.

In your example, how to run a coordinated harassment campaign is a bad thing; OK, I'll concede that. But what's the difference between a coordinated harassment campaign and a coordinated social protest against political figures whose policies you disagree with? The answer is point of view, and that is what I consider the real threat of AI: controlling the sources it draws from to restrict what it "knows" based on point of view.
 