The Future of Erotica: Exploring AI-Generated Content

"Sole owner of whatever portion anybody owns" and "full owner" are very different things though - it's quite possible to have content that nobody owns. This came up in the monkey selfie case you mentioned too: although it was determined that the monkey didn't own copyright in the photo, it didn't follow that the camera-owner did. Wikimedia took the position that the photo was public domain and treated it as such, the camera-owner disagreed and threatened legal action but that seems to have fizzled.

The problem that arises, if you interpret "full owner" in a strict way, is that in many, perhaps most cases, the author couldn't claim to be the author. It would lead to absurd results.

In almost any creative work subject to copyright that you can think of, there will be many elements of the work that the author can't claim ownership of, because they don't qualify as copyrightable subject matter. For instance, if I write a history book, doing extensive research using other books and working with an editor and a proofreading application, there will be elements of my work that are not protected: dates, facts, quotes from other works, photographs licensed from other works, and the contributions of my editor and my proofreading application. I cannot claim to own every single thing in my work. But there is no doubt that I'm entitled to register the copyright in my history book, and I can claim ownership subject to a disclaimer of anything I don't own.

The same principle holds for a Literotica story, even though the details are different. My story may be based on research into other stories, or into the setting or time period. Perhaps it's based on psychological works. Perhaps I had the help of an editor. As long as I'm not falsely claiming ownership of these things, which I cannot, it seems to me I should still be able to say I am the "full owner," because if the answer were otherwise it would be absurd. I don't see the use of AI as being any different, so long as its use does NOT involve the contribution of unattributed copyrightable subject matter to my story.

I agree as a practical matter nobody can be very confident about what the right answers are until the courts--or Laurel--have weighed in.
 
Tilan:
Grammar check: The problem that arises, if you interpret "full owner" in a strict way, is that in many, perhaps most cases, the author couldn't claim to be the author. It would lead to absurd results.

ChatGPT:
The following is a suggested revision of your text:
"If you interpret 'full owner' strictly, a problem arises in that in many, if not most cases, the author could not claim to be the author, leading to absurd results."

This is how I use it; maybe you should try it too.
I'm not going to use ChatGPT to revise social media posts. That would be a waste of time. I'm fine with my post the way it is.
 
I read the first two chapters. It was a bit simplistic, and the main points and ideas repeated too often. Also, the main character is allowed to leave her post while openly taking an ancient artifact and a manuscript. But hey, it was fiction. It was better than I thought it would be, but "I asked AI to write a book"?

No, having read the research notes, it was guided and prompted throughout regarding the development of ideas and continuity.
 
How about this?

Simon:
revise: "Susanna called to her son from her home office. Michael did not respond at once. He sat slouched on the sofa, on the other side of the house, playing Fortnite on the big-screen TV in the living room. It was late Sunday morning, and Michael didn't have to go to work."

ChatGPT:
Susanna's call echoed through the house, traveling from her home office to the living room where Michael lounged on the sofa, engrossed in a game of Fortnite on the big-screen TV. It was late Sunday morning, and he had no obligation to work.
:)
I wouldn't swap those. I see the logic of what ChatGPT is doing, but its version sacrifices the tone I wanted. "Susanna's call echoed through the house," in a way, elevates the prose, but by doing so makes it unnecessarily pretentious and also vague. That's not what I wanted to say. I think it's silly to say that a "call" "travels" from one room to the other. I prefer "slouched" to "lounged." I prefer "didn't have to go to work" to "had no obligation to work." The tone and word choice aren't right. I much prefer my version.

I have no doubt that, judiciously used, ChatGPT could be used to assist one in editing and coming up with better ways of saying things. But in my view this passage is not an example of that.
 
Agreed. The GPT version is overwritten; it's a technological form of writing by committee.
 
Exactly. A huge committee. It would suit 8letters down to the ground, and drive old KeithD insane.

Seems to me folk would be better off doing it themselves. It would appear you need to be an expert in prompts and repeats and go-arounds, and getting it to do something over and over again. In the time it takes you to do all that, you could have written five hundred of your own words (and still be a human being).
 
This issue isn't going to drive me insane no matter what happens with it. I'm old enough to ignore it without it impinging on anything I do.
 
Let me just toss this out here from a recent discussion I had on the subject:

Consider the technologies that allow for spell check in applications like Word, auto-populate features in messaging apps, translation from one language to another, or even applications such as Grammarly that are used by numerous authors here, including myself. Aren't these all AI-based tools that gleaned their content and training from established works such as previously created dictionaries?

Depending on how loosely one defines "AI", yes. The Literotica page on AI-generated works acknowledges those kinds of tools.

Isn't it the same basic technology which allows Grammarly to provide suggested context for your written words that allows text-generating AI to produce suggested content based upon your prompts?

Yes or no, depending on how reductionist one wants to get about it. There are similarities, but you can't use Grammarly to write a multi-paragraph story from a short prompt, so clearly they're not identical technologies. GPT's devs haven't been spending millions of dollars on compute time just to create something that replicates what tools like Grammarly already do.

But even if they were the same technology, "the same technology" isn't much of an argument. The same technology that lets an ambulance driver get somebody to hospital allows a terrorist to ram into a crowd, but that doesn't mean we have to accept both equally.

(Not suggesting that GPT is terrorism here, any more than fixing an essay with Grammarly is saving a life; just making the point that "same technology" isn't a valid argument.)

Isn't it your intellect that wrote the words that Grammarly suggests changes to? Isn't it your intellect that creates the prompts that a text generator uses to provide the requested content?

We use our intellects in many different ways. We can write text for Grammarly to adjust, we can write prompts for GPT, we can do a difficult exam or figure out ways to cheat in it, we can cure diseases or engineer new ones.

Different things are different, even when they have some elements in common.

Are we hypocrites to decry AI when it is a tool used by many of us on a daily basis in slightly different forms?

No, because this isn't "slightly different". It's a lot different, in ways which are obvious and important.
 
Well, I'm more of an advocate for people practicing and developing traditional writing skills, but if someone wants to write with AI, and it's decent, then knock yourself out. Some people will prefer that you point out that it's AI, and others might not like the idea. Also, people here have already looked into AIs like ChatGPT and others for their potential for writing stories, if you check the forums. Not to mention the shortcomings that AI still has for creativity and a unique writing voice.

Personally, I'm not really satisfied with AI's current writing capabilities for fiction, especially in the way I want to write my stories. ChatGPT, for example, is a little limited in its imagination: sometimes it just blasts through the plot with quick narration, skipping over dialogue. But the worst thing? It almost always tries to push for happy endings where everyone resolves their issues in the cleanest, nicest, most PG way possible, as quickly as possible. I blame its established safe-content policies for its goody-two-shoes behavior. There's a lot more, but you get the point.


Also, I find AI, namely ChatGPT, to be better as an analysis tool for reviewing what you've already written. That's where its "perspectives" are a lot more useful, in my opinion. That's where its expertise and knowledge in terms of technical writing, formulating an idea (for summarizing its own thoughts), or offering solutions for cleaner, more concise sentence structures are helpful. If you've already got that down, then it might not be so useful.

Even then, current AI suffers from occasionally inaccurate and contradictory information when it comes to opinions and analysis, so sometimes it's not reliable.

Also, some AI, mainly the most popular ones, don't allow erotic content on their services. We do have a story-idea forum for people who don't have the skills to write their story ideas down. So there are a lot of reasons why I don't want to have it write my stories. Finally, learning how to write yourself and improving your craft is fulfilling in its own way; having someone else do it, AI or even another person, just doesn't scratch that itch the right way, personally.
I have been using ChatGPT as an editor for months now. It does a good job of editing in the style you tell it to edit in. If you don't say "edit for grammar," for example, then it will rewrite your text block the way it sees fit. If you instruct it, it will just do as ordered, and you won't suddenly lose a character or find a whole new event in the middle of your scene.


I think artists were quick to catch on to AI image generators, but then, that's how they make their living - which authors could argue as well. And now we have AI speech generators that can clone any voice they can "hear" on any public forum, like YouTube, Facebook, Instagram, and I don't know what else.

I saw a video today about voice cloning. So far as I can see, AI has benefited scammers most of all: their traditionally badly spelled, poor English now looks fantastic, and if that can hook a fish enough to phone the scammers, the victim will now hear an American voice on the line.
Fun times ahead. I'm waiting for text/speech-to-image to come along - there probably are a few already, but it will really take off when you can write or speak your own video. Imagine the erotica. 😂
 
It's my intellect that makes me ask Stephen King to tell me a horror story (I'm his son in this example). Does that mean I created the horror story he tells me?
Only if your father (Stephen King) has no intellect of his own and simply regurgitates the story content from other sources. There is a difference between creativity and compilation.
 
But I don't do anything more with one than with the other. So I don't think creating a prompt for AI is any more creative than giving Stephen King the same prompt, and reflects the same level of involvement on my part. Now, making a good story using AI would involve a lot more than writing one prompt. There is creativity in writing dozens of prompts, tweaking them, editing the various results, splicing them together, cutting parts out, rewording things. But just writing a prompt? "Write a story with beginning, middle, and end," is not creative output in my opinion, but it is a perfectly good prompt.
 
I have been using ChatGPT as an editor for months now. It does a good job of editing in the style you tell it to edit in. If you don't say "edit for grammar," for example, then it will rewrite your text block the way it sees fit. If you instruct it, it will just do as ordered, and you won't suddenly lose a character or find a whole new event in the middle of your scene.


I think artists were quick to catch on to AI image generators, but then, that's how they make their living - which authors could argue as well. And now we have AI speech generators that can clone any voice they can "hear" on any public forum, like YouTube, Facebook, Instagram, and I don't know what else.

I saw a video today about voice cloning. So far as I can see, AI has benefited scammers most of all: their traditionally badly spelled, poor English now looks fantastic, and if that can hook a fish enough to phone the scammers, the victim will now hear an American voice on the line.
Fun times ahead. I'm waiting for text/speech-to-image to come along - there probably are a few already, but it will really take off when you can write or speak your own video. Imagine the erotica. 😂
I feel you on that. ChatGPT is a great editor, beta reader (for the most part), and a much more comprehensive thesaurus than Google. I could ask it to come up with as many synonyms for a word as I want, or ask for words matching a definition or idea I give it, and it'll come up with potential words. I've had it analyze writing, and it can come up with a good summary of characters, their personalities, and their motivations using context from the scene. I haven't used it for rewriting things (though I have asked it for examples from a sentence or two of my writing where it could be more concise, and it's been helpful with its tips and feedback).

I've had it translate stuff to Latin, Sanskrit, and runic Norse for fun. I've messed around with it and had it come up with some of the dumbest, most ADHD-inspired stories, and incoherent conspiracy theories like birds being CIA drones.

You know, when you say voice cloning and erotica, I'm picturing a Morgan Freeman voice narrating some fanfiction. Someone is bound to do it. Or like that one video where Gilbert Gottfried read, in his Gilbert Gottfried voice, 50 Shades of Grey.
 
Read an article yesterday about AI, positing that to reach true intelligence, one cannot separate the mind from the body:

Can intelligence be separated from the body? https://www.nytimes.com/2023/04/11/...ytcore-ios-share&referringSource=articleShare

They've created Moxie, the first AI friend, who learns from its environment, can read human emotions through facial recognition, and responds empathetically as it learns.

I am fascinated by the questions posed: that ChatGPT can only go so far since it is a disembodied mind, so to speak, and that the next wave will be AI with robotic bodies that live, interact, and learn from the real world as we do - presupposing that an intelligent being pushes back on the world it lives in.

What interests me are the questions on the connection between the mind and the body, how true intelligence is relational. The latest studies have found that memory isn't imprinted on the mind, but on the body - which for me is beautiful, exciting and scary at the same time. And how true intelligence cannot be separated from experience.

I think AI is a future that has already arrived, and it can be both exhilarating and frightening to think of what that future will be.
 
Researchers fed it the bar exam. That doesn't mean that it's licensed to practice law. https://www.iit.edu/news/gpt-4-passes-bar-exam

The AI representing a client was a stunt, and was pulled. https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

The CBS article goes very easy on Browder/DoNotPay there. For a more critical take on it (may have to click through to see the tweets by Kathryn Tewson): https://threadreaderapp.com/thread/1620236530613911552.html

Among other things, it appears Browder reneged on a promise to donate to charity if people boosted his promotion - then when called on it, made a donation and apparently photoshopped the receipt image to misrepresent the date. Maybe not the kind of person I'd want to trust in a high-stakes case.
 
Read an article yesterday about AI, positing that to reach true intelligence, one cannot separate the mind from the body:

Can intelligence be separated from the body? https://www.nytimes.com/2023/04/11/...ytcore-ios-share&referringSource=articleShare

Paywalled, alas.

They've created Moxie, the first AI friend, who learns from its environment, can read human emotions through facial recognition, and responds empathetically as it learns.

I am fascinated by the questions posed: that ChatGPT can only go so far since it is a disembodied mind, so to speak, and that the next wave will be AI with robotic bodies that live, interact, and learn from the real world as we do - presupposing that an intelligent being pushes back on the world it lives in.

What interests me are the questions on the connection between the mind and the body, how true intelligence is relational. The latest studies have found that memory isn't imprinted on the mind, but on the body - which for me is beautiful, exciting and scary at the same time. And how true intelligence cannot be separated from experience.

Got a reference for those studies?
 
Describing it as "the first AI friend" seems like a stretch. We've had plenty of previous robots that used computing technology to provide some degree of companionship - Aibo and Furbies spring to mind. No doubt Moxie is a lot more sophisticated, but that doesn't make it a "friend" in the way that a human, or even a cat or a dog, can be a friend.

In any case, I'm not really clear on how this relates to the rest of our discussion. Moxie is designed for more direct physical interaction with the world than GPT because that's what its purpose requires, not because this is an intrinsic requirement of "intelligence". Under the skin, those physical perceptions are converted into a stream of binary data before any serious "AI" takes place. The "mind" inside Moxie has no way of knowing whether it's in a real or a simulated body, or being fed sensory inputs directly from another computer.

A quick Google search on body memory - here is one study, but I am sure there are a lot more:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9138975/

Hope this helps!

Nothing in that article supports the claim that "memory isn't imprinted on the mind but on the body". It refers to "body memories", but only in the sense of "memories of bodily experiences", not "memories imprinted on the body". To the contrary, it discusses these memories as things that are processed and stored in the brain, talking about the specific structures in the brain that are involved in storing such memories, and how damage to those parts of the brain can damage such memories.

(Of course, one could be pedantic here and say that the brain is part of the body. But when discussing issues of consciousness, "body" is often used in a way that excludes what's going on in the brain, and if an AI running on physical servers is "a disembodied mind" I assume we're following that latter usage.)
 
You're right. I'm not a doctor, just a curious person following things that I find fascinating. I linked the article because it shows the evolution of ChatGPT4; Moxie is basically ChatGPT4 encased in a robot body. It is touted as the first AI friend by the company that invented it (not that ChatGPT is actually true AI, since it isn't yet), but by the current definition of the word, it is the first AI implanted in a robot body - very different from Furbies and the like.

I suppose my statements lacked context because the article was behind a paywall. The thread talks about the Future of Erotica: Exploring AI-Generated Content, and seeing this is where the future is taking us - now that AI is being taught to learn from the physical world, not just language - what kind of content will it then generate? How will that affect what we do?

What I found most interesting about the article were the questions on intelligence, and since this was something being discussed about the current ChatGPT, I thought it would be great to share. An AI learning from the real world would be a lot smarter - what could it then create?

Regarding the connection of the article to the discussion, I found it enlightening that they mentioned how intelligence is linked to the body - how it cannot be separated from it, which is quite an idea to munch on - and since we write erotica, I found this understanding quite poignant as it relates to erotic intelligence.

As to the article linked about body memory and my statement, you're right. It was the first result I googled, and it was out of context. I also should have said: memory is held not only in the mind, but in the body. Of course, yes, memory is processed in the mind, but what excites me is the idea that the body holds memory too, whether it be pleasure or pain, and exploring this theory both for creativity and for therapy is exciting.

I'm sorry if I posted something out of context with this thread; my mind flies in many directions all at once, and sometimes, when things excite me, I just want to share them with others who may find them interesting too. I will be more thoughtful next time ✌️
 
You're right. I'm not a doctor, just a curious person following things that I find fascinating. I linked the article because it shows the evolution of ChatGPT4; Moxie is basically ChatGPT4 encased in a robot body.

It's... not, though? The dates make it impossible for it to have been built on GPT-4 (Moxie was commercially available in 2022, GPT-4 was only released last week) and as far as I can tell from the website it doesn't have anything to do with any version of GPT.

According to Moxie's website, it uses a proprietary platform named "SocialX". The website isn't very informative about just how SocialX works, but if it does what they say it does, it's not GPT. In particular, they claim it has the capacity to learn faces etc. while in use, which implies training while in use; that's not how GPT works. (In fact, the "P" in the acronym stands for "pre-trained".)

(Now that I have looked through Moxie's site and one of the linked publications, I have some serious concerns about the product, and in particular the way it's being marketed for use with autistic children. But those concerns are mostly tangential to the Literotica discussion here, so I won't derail by opening that can of worms.)

It is touted as the first AI friend by the company that invented it (not that ChatGPT is actually true AI, since it isn't yet), but by the current definition of the word, it is the first AI implanted in a robot body - very different from Furbies and the like.

I mentioned the Sony AIBO previously. It was a "robot dog" toy that used voice recognition and other technologies for interactive behaviour. It originally came out in 1998, was retired for a while, and a couple of years back Sony announced a new version "capable of forming an emotional bond with users".

The name stands for "Artificial Intelligence roBOt" and is also a play on a Japanese word for "Friend".

Was the AIBO really "AI"? Debatable. The term has become virtually meaningless under the weight of tech marketing bros who use it to plug anything that has a chip in it, and I think calling AIBO "AI" would be very generous.

But...

If you look at the "Science Behind Moxie" page on their website, their founder and CEO, Paolo Pirjanian, previously "helped create technologies for many products ranging from the Sony AIBO to the iRobot Roomba".

I feel like if he's already claiming credit for contributing to a robot toy marketed as an "AI friend", he doesn't get to release another robot toy and claim it's "the first AI friend". That seems a bit shady, no? If he's going to do that, he could at least have the decency to pretend the previous time didn't happen.

I suppose my statements lacked context because the article was behind a paywall. The thread talks about the Future of Erotica: Exploring AI-Generated Content, and seeing this is where the future is taking us - now that AI is being taught to learn from the physical world, not just language - what kind of content will it then generate? How will that affect what we do?

FWIW, AI has been learning from the physical world for a long time - applications like autonomous driving and robot locomotion have been major areas of study.

What I found most interesting about the article were the questions on intelligence, and since this was something being discussed about the current ChatGPT, I thought it would be great to share. An AI learning from the real world would be a lot smarter - what could it then create?

Hard to discuss without having a working definition of "intelligence". (And please note that some of the ones currently in vogue in AI circles have rather unsavoury origins.)

As I mentioned earlier in the thread, all the "AIs" currently extant are highly specialised to one task or another. GPT's later iterations have had a little bit more flexibility than most, but it's still very narrow compared to the kind of flexibility we find in a human.

An AI that learns from the real world will be "smart" at real-world tasks of the kind that it's been trained on, and it will almost certainly be very very dumb in anything else. If we could figure out how to build an AI capable of really integrating its knowledge across multiple fields, then having experience with physical phenomena probably would help it become a better writer. But none of the ones we have are close to that yet, AFAIK.
 
Maybe it will all fall apart when the AIs start 'learning' from other AIs. A form of reductio ad absurdum?
 
Or even from themselves. That's likely to become a real problem when new versions of GPT/etc. depend on the Internet for training material but more and more pages on the Internet are written by the older versions.
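That feedback loop is easy to demonstrate in miniature. Here's a toy Python sketch (my own illustration, not anyone's actual training pipeline): each "generation" is trained only on text sampled from the previous generation's output, and since any word that doesn't happen to get sampled disappears forever, diversity can only shrink - a crude analogue of the "sameness" people notice.

```python
# Toy illustration of models "learning from themselves": each generation
# trains only on a corpus sampled from the previous generation. Words that
# are never sampled vanish for good, so vocabulary can only shrink.
import random

random.seed(42)

# Generation 0: "human" corpus - a long tail of 200 rare words plus one
# very common word.
corpus = [f"word{i}" for i in range(200)] + ["the"] * 300
vocab_sizes = []

for generation in range(20):
    vocab_sizes.append(len(set(corpus)))
    # "Train" on the empirical distribution of the current corpus, then
    # generate the next corpus by sampling from it with replacement.
    corpus = random.choices(corpus, k=len(corpus))

print(vocab_sizes[0], "->", vocab_sizes[-1])
# Vocabulary never grows: each generation's word set is a subset of the
# previous one's, so the loss is one-way.
assert all(a >= b for a, b in zip(vocab_sizes, vocab_sizes[1:]))
```

Real "model collapse" is subtler than resampling a word list, of course, but the one-way direction of the loss is the same idea: each generation can only reproduce what survived from the last one.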
 
Yes - but won't the logic fails and bland repetition become even more obvious? That's the current giveaway for me: that sense of sameness, the constant repetition of phrases that sound like extracts from Marketing 101 or Corporate Guff, Notes for Beginners.
 
Yeah, that's kind of what I am thinking; as if the whole structure will implode on itself. Or not... 😬
 