AI Allegations Thread

I think the prose in the third sample tries too hard, and you've lost the spontaneity of your plainer English first go.

I reckon one of the major issues with all of these "checker programs" is that writers lose confidence in their own natural style, and are given too many options.

This.

Spontaneity is all. My strong feeling is that the farther you get from your original voice at the moment of creation, the less authentic the story becomes. If that distance is caused by a writer's own proofing and edits, I think (for me personally) that's an unfortunate situation, and one I never pursue. If it's caused by a beta-reader or an editor, then the writer's voice is further diluted. And if it's caused by a computer program? Then the writer's voice becomes submerged under an overlay that's not even a creative human being.

I don't see how that's desirable in any way at all. I'm happy the site owner seems to agree with me; Laurel wants humans writing on her site. You can disagree with that, but it's her site and her rules.
 
We all want human writing, and there's no disputing that (although I suppose copyright concerns are the more serious issue here). However, between total negation and total dependence, there's a broad middle ground. Originality became a rare commodity long before the advent of artificial intelligence. Nothing is entirely original; we're all influenced by things we have been exposed to and borrow ideas and phrases all the time, often unconsciously.

I hope no one here seriously believes that the examples I gave could have been generated out of thin air by a machine. All the examples, regardless of the words used, have a voice, pace, and symbolism unique to me, things that no machine or human can ever imitate. Or so I believe.

In my view, as long as the text maintains the writer's unique style and tone, any tool that can help improve it is legit. Most of us aren't soloists. Why is it acceptable to exchange ideas with co-writers and implement their feedback, but not to accept mostly technical suggestions from a machine?
My understanding is that it's not the technical suggestions that L&M object to, but rather to non-technical use of AI like changing "while the incessant rain continued its assault outside" to "as the unrelenting downpour continued its onslaught outside".

FWIW, those particular changes seem very minor to me. You could've done the same with a thesaurus, and if people were just using generative AI as a fancy thesaurus plus spelling/grammar checker, maybe it wouldn't be such an issue.

But if L&M were to set the limit at "you can use generative AI but only a little bit"... well, we've all read those stories that start out with "okay, you can put it in, but only the tip". We all know how those end. Either permit it or don't, but I don't think a "just a little bit" policy is viable.
 
This is the revised version with artificial assistance:


And this is the original paragraph:


The changes are mild but undoubtedly worthwhile.
It doesn't matter what you ask the AI to spit out for you or what it spits out. It doesn't even matter if it's exactly the same as what you've written before. If you use any output from a Generative Pre-trained Transformer (GPT) or any other large language model, you're using an artificial neural network that relies on stories it swiped from the internet (that's the Pre-trained part). Stories that other authors put their blood, sweat, and tears into. Unless you use a database of your own work, or the work of someone you have express permission from, as the language model (chances are you ain't), then you're taking from other writers without permission.
 
One of the arguments for banning generated text is that the copyright status is unclear, and LE requires you to be the copyright holder of anything you post. But in cases like @Dybbuk's example it would be absurd to believe that a slight AI rewrite compromises the copyright claim of the author of the original, unedited text.

If that were the case, anyone could take version 1 (or any text they find), run it through a rewriter and claim it as an original work. And while that sort of thing happens, it is universally recognized as transparent plagiarism.

If you use any output from a Generative Pre-trained Transformer (GPT) or any other large language model, you're using an artificial neural network that relies on stories it swiped from the internet (that's the Pre-trained part). Stories that other authors put their blood, sweat, and tears into.

"Swiped" suggests stolen, that somebody's rights were infringed. This is presumably true if the stories were downloaded illegally (pirated), but if they were put up to read for free (Edit: or the copy that was used was legally acquired), it is not at all clear that it is, legally or morally. Copyright is the right to restrict people from making copies, but pretraining a GPT is not really "making a copy" of the story.

You do not in general have a right to preclude others from analyzing, processing or learning from what you have written. (For example, you couldn't say that @8letters' statistical analyses of LE stories "swipe" your stories because they use them as material.) Nor, I would argue, should you. And what LLMs do is essentially a fancy version of that.
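To make the comparison concrete, here is a toy sketch (invented for this post, nothing to do with any real LLM or with 8letters' actual scripts) of what "analysing and processing" a text means: deriving numbers from a story without retaining the story itself.

```python
import re
from collections import Counter

# A stand-in story; the "analysis" below keeps only statistics,
# not the prose. Counts are unambiguous by construction.
story = "The rain fell on the roof and the rain kept falling."

words = re.findall(r"[a-z']+", story.lower())
stats = {
    "word_count": len(words),
    "distinct_words": len(set(words)),
    "top_word": Counter(words).most_common(1)[0][0],
}
print(stats)  # -> {'word_count': 11, 'distinct_words': 8, 'top_word': 'the'}
```

The point: once the counts are taken, the original sentence is gone. Whether an LLM's weights are "essentially" this or something closer to a copy is exactly what the rest of the thread argues about.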

It's only if and when the models actually output something that is (in some sense) a recognizable copy of their pretraining data that you have a clear case of copyright infringement or (IMO) a violation of the moral rights of the original author.

Then you're taking from other writers without permission.

Writers always take from other writers without permission.
 
"Swiped" suggests stolen, that somebody's rights were infringed. This is presumably true if the stories were downloaded illegally (pirated), but if they were put up to read for free, it is not at all clear that it is, legally or morally. Copyright is the right to restrict people from making copies, but pretraining a GPT is not really "making a copy" of the story.

You do not in general have a right to preclude others from analyzing, processing or learning from what you have written. (For example, you couldn't say that @8letters' statistical analyses of LE stories "swipe" your stories because they use them as material.) Nor, I would argue, should you. And what LLMs do is essentially a fancy version of that.
The breach of copyright argument is taking quite a battering in the federal courts at first instance at the moment. That may change as it rises through the appeals system to SCOTUS, but don't hold your breath.

Kadrey v Meta
 
No, I'm not. I'm only working with my original content, meticulously refining it para by para, line by line, and word by word; a long, exhaustive process solely aimed at sanctifying the written words.

As an aside, I agree with the other commenter that your original version was better in almost every instance. Your prose is on the verge of overwrought, and I think the rewrite tips it over the edge in several places. And the pilgrim image doesn't make sense.
 
No, I'm not. I'm only working with my original content, meticulously refining it para by para, line by line, and word by word; a long, exhaustive process solely aimed at sanctifying the written words.
If you're going to use LLMs, you should at the very least understand the basics of how they work. As with every tool.
 
If you're going to use LLMs, you should at the very least understand the basics of how they work. As with every tool.
Agreed. The "pilgrim" change, for instance, isn't a synonym: it's the result of AI analysing numerous texts where it sees references to weary pilgrims on journeys. That's why it wants to replace the worm metaphor with the pilgrim.
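You can see the mechanism in miniature with a few lines of word counting (a deliberately crude toy, invented for this post; real models use far richer context, but the principle is the same):

```python
from collections import Counter

# Made-up mini-corpus: the model only knows what tended to follow
# "weary" in the texts it was fed.
corpus = """
the weary pilgrim trudged on
a weary pilgrim neared the shrine
the weary worm inched along
""".split()

following = Counter(
    b for a, b in zip(corpus, corpus[1:]) if a == "weary"
)

# "pilgrim" wins purely on frequency, with no grasp of the metaphor.
print(following.most_common(1))  # -> [('pilgrim', 2)]
```

That's why the worm metaphor loses: not because the machine judged it weaker, but because "pilgrim" is statistically the more travelled road.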

Also:
sanctifying the written words
What? I work hard on my texts, I'm as meticulous and picky as anyone. Comes from 25 years as a language professional. But I've never tried to sanctify my words.
 
I just cringe at the thought of someone who has poured their heart and soul into a 50k piece, only to face unwarranted, devastating rejection.

We've been over this. It's not "unwarranted." Laurel has told us all not to do this. So if you're doing it anyway, then it's foolish to think she'll let it go. SHE IS THE SITE OWNER. You are not. She gets to make the rules.

Whether it's "devastating" is on you, and the level of investment you choose to place on your submissions.

The fervent stance of "These machines are poised to devour us all; let's unleash aggressive chemotherapy and annihilate both the good and the bad cells to survive" is greatly exaggerated and misplaced.

Again, the site owner doesn't agree with you. You can refuse to accept that, but you shouldn't complain about rejections that might ensue as a result.
 
The point was to show how you can tweak your own work using AI without resorting to plagiarism.
Plagiarism isn't the problem! The problem (the still-active claim in the Kadrey v Meta case that @XerXesXu mentioned) is that the uber-LLM you are using, in all likelihood (read: abso-fucking-lutely), includes copyrighted stories. In other words, it's not that you're plagiarizing per se, but that you are complicit in copyright infringement, aka stealing, aka unjustly exploiting the works of others (still actually TBD, but my bet, and my hope, is on 'yes, it is').

To be clear, I'm not accusing anyone of plagiarizing (necessarily), nor am I against the technology - I use the shit for work. I've trained ML models. It's a great tool (or can be, anyway). The problem is in the (lack of) regulation of AI models and the (lack of) scruples of the companies building the consumer-grade products you're using. We're all still waiting for the dust to settle; meanwhile, you've been given pretty damn clear guidance by the owners of this site. Follow it or take your work elsewhere.
 
After having 72 stories published in the last couple of months, all of mine are now being sent back thanks to this new AI problem.

The most I've done is run my stories through a spell/grammar checker online. Outside of punctuation and a missed word here or there, the stories are written by me.

If this is going to continue to be an issue and I have to jump through hoops to get something published here, it's not worth my time.
 
We've been over this. It's not "unwarranted." Laurel has told us all not to do this. So if you're doing it anyway, then it's foolish to think she'll let it go. SHE IS THE SITE OWNER. You are not. She gets to make the rules.
But there are people who are not doing this whose submissions are being denied and whose previously published stories are being pulled en masse.

It's something that you have to worry about when submitting: will my story be accepted as written by me, or will it be incorrectly flagged as written by AI?
 
After having 72 stories published in the last couple of months, all of mine are now being sent back thanks to this new AI problem.

The most I've done is run my stories through a spell/grammar checker online. Outside of punctuation and a missed word here or there, the stories are written by me.

If this is going to continue to be an issue and I have to jump through hoops to get something published here, it's not worth my time.
72 stories in a couple of months is a helluva lot!
 
If you're the writer, you should be doing the writing.
This. Laurel seems to be quite adamant about the usage of AI and I really hope she doesn't relent. If AI starts writing our stories then everybody will be a writer, and if everybody is a writer then nobody is a writer.
 
This. Laurel seems to be quite adamant about the usage of AI and I really hope she doesn't relent. If AI starts writing our stories then everybody will be a writer, and if everybody is a writer then nobody is a writer.
But she should relent on all the people who didn't actually use AI though.
 
But she should relent on all the people who didn't actually use AI though.
Absolutely. It's just that as AI gets better, discerning AI-written stories will get increasingly harder, and once we enter the territory of false positives and false negatives, there will be a lot of paranoia and accusations flying around. Human nature implies that most people who can cheat and get away with it will cheat, and then you can never be sure what you're reading anymore...
I am not sure that anyone is smart enough to figure out a proper approach to this problem. Maybe we should ask the AI? 🫤
 
But there are people who are not doing this whose submissions are being denied and whose previously published stories are being pulled en masse.

It's something that you have to worry about when submitting: will my story be accepted as written by me, or will it be incorrectly flagged as written by AI?
She has clearly taken a harsher stance and it sucks for some people, so I truly sympathize with them. But there is also one hard truth here: just because some of those people claim they are not using AI doesn't mean they are not using it. It feels weird to defend Laurel's actions for once, but I can see how hard her choice is. She can either take a hard stance and take down all suspicious stories, taking down some legit stories in the process, or she can be cautious and risk the website being flooded with AI content. It really comes down to what is important to her personally as a website owner. She chooses the policies and that is fine with me.
 
One of the arguments for banning generated text is that the copyright status is unclear, and LE requires you to be the copyright holder of anything you post. But in cases like @Dybbuk's example it would be absurd to believe that a slight AI rewrite compromises the copyright claim of the author of the original, unedited text.

If that were the case, anyone could take version 1 (or any text they find), run it through a rewriter and claim it as an original work.

No, the latter doesn't necessarily follow from the former. The other possibility is that the rewrite constitutes a derivative work, in which both the original author and the rewriter have copyright interest, and neither would be able to post it to Literotica without the other's permission.


"Swiped" suggests stolen, that somebody's rights were infringed. This is presumably true if the stories were downloaded illegally (pirated), but if they were put up to read for free (Edit: or the copy that was used was legally acquired), it is not at all clear that it is, legally or morally. Copyright is the right to restrict people from making copies, but pretraining a GPT is not really "making a copy" of the story.

Hmmmm.

Prompt: tell me the first chapter of Antoine de Saint-Exupery's "The Little Prince"

ChatGPT: Certainly! "The Little Prince" by Antoine de Saint-Exupéry was published in 1943 and is a classic novella. Since it was published more than 70 years ago, it is in the public domain [You probably don't want to rely on GPT's understanding of copyright law - BT], and I can provide you with an excerpt from the first chapter. Here is the opening passage:

Once when I was six years old, I saw a magnificent picture in a book about the primeval forest called ‘True Stories.’ It showed a boa constrictor swallowing a wild beast. Here is a copy of the drawing.

In the book, it said: ‘Boa constrictors swallow their prey whole, without chewing. Afterward, they are no longer able to move and sleep through the six months...

There are very minor differences in wording from the version I have available, but nothing beyond what one might expect from translation choices. It definitely has "learned" the text very closely, if not verbatim, though the quote is shorter than the whole chapter I asked for.

Prompt: tell me the first chapter of J.R.R. Tolkien's "The Hobbit"

ChatGPT: Certainly! "The Hobbit" by J.R.R. Tolkien was published in 1937 and is also in the public domain. [No it sodding isn't - BT] Here is an excerpt from the first chapter, "An Unexpected Party":

"In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort.

It had a perfectly round door like a porthole, painted green, with a shiny yellow brass knob in the exact middle. The door opened on to a tube-shaped hall like a tunnel: a very comfortable tunnel without smoke, with paneled walls, and floors tiled and carpeted, provided with polished chairs, and lots...

That one is verbatim to the copy I have at hand.

Prompt: are you sure "The Hobbit" is in the public domain?

ChatGPT: I apologize for the confusion in my previous response. "The Hobbit" by J.R.R. Tolkien is not in the public domain. It was published in 1937, and as of my last knowledge update in January 2022, it remains under copyright protection. I can provide information and answer questions about the book, but I cannot provide verbatim excerpts from copyrighted texts. If you have specific questions or if there's anything else you'd like to know, feel free to ask!

whoops!

Not that an excerpt that short would be likely to constitute copyvio, but hopefully that illustrates that GPT can indeed learn the source material so closely as to be effectively "making a copy". I expect most of the stories it's trained upon would be less perfectly memorised, but it doesn't need to be perfect to be copyright violation.
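How memorisation can fall out of mere statistics is easy to show with a toy (again, a caricature, not how GPT actually stores text): if a passage's three-word sequences are unique in the training data, then a "model" that keeps only next-word statistics will still regenerate the passage verbatim.

```python
# Toy memorisation demo using the line quoted above.
text = "in a hole in the ground there lived a hobbit".split()

# "Train": map each pair of adjacent words to the word that followed.
nxt = {(a, b): c for a, b, c in zip(text, text[1:], text[2:])}

# "Generate": start from the opening pair and follow the statistics.
out = list(text[:2])
while tuple(out[-2:]) in nxt:
    out.append(nxt[tuple(out[-2:])])

print(" ".join(out))  # -> in a hole in the ground there lived a hobbit
```

The model here never stores the sentence as a sentence, only transition counts, yet the sentence comes back word for word. Scale that intuition up and "the weights aren't a copy" gets a lot murkier.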

You do not in general have a right to preclude others from analyzing, processing or learning from what you have written. (For example, you couldn't say that @8letters' statistical analyses of LE stories "swipe" your stories because they use them as material.) Nor, I would argue, should you. And what LLMs do is essentially a fancy version of that.

"Essentially" and "fancy version" are doing a lot of heavy lifting here. 8letters' statistical analyses don't have the capability to reproduce entire chunks of the stories they're based on.

It's only if and when the models actually output something that is (in some sense) a recognizable copy of their pretraining data that you have a clear case of copyright infringement or (IMO) a violation of the moral rights of the original author.

Which, as seen above, they can do.
 